Dynamically Weighted Majority Voting for Incremental Learning and Comparison of Three Boosting Based Approaches

Proceedings of International Joint Conference on Neural Networks, Montreal, Canada, July 31 - August 4, 2005

Aliasgar Gangardiwala
Electrical and Computer Engineering
Rowan University, Glassboro, NJ, USA
gangar34@students.rowan.edu

Robi Polikar
Electrical and Computer Engineering
Rowan University, Glassboro, NJ, USA
polikar@rowan.edu

Abstract - We have previously introduced Learn++, an ensemble based incremental learning algorithm for acquiring new knowledge from data that later become available, even when such data introduce new classes. In this paper, we describe a modification to this algorithm, where the voting weights of the classifiers are updated dynamically based on the location of the test input in the feature space. The new algorithm provides improved performance, stronger immunity to catastrophic forgetting and a finer balance to the stability-plasticity dilemma than its predecessor, particularly when new classes are introduced. The modified algorithm and its performance, as compared to AdaBoost.M1 and the original Learn++, on real and benchmark datasets are presented.

I. INTRODUCTION

A. Incremental Learning

Supervised classifiers are effective and powerful learning tools for pattern recognition and machine learning applications. As most machine learning and pattern recognition professionals are painfully aware, however, the generalization performance of any learning algorithm relies heavily on adequate and representative training data. Since data collection is an expensive, time consuming and tedious process for most practical applications, such data are often acquired in small batches over time. Waiting for the entire dataset to be available for training may prove ineffective, uneconomical and inflexible. In such cases, it would be more desirable to train a classifier on the available data and incrementally update the classifier as new data become available, without compromising the performance on previously learned data.

Learning new data incrementally without forgetting previously acquired knowledge raises the issue of the stability-plasticity dilemma [1]: acquiring new knowledge requires plasticity, whereas retaining previously acquired knowledge requires stability. The challenge is then to achieve a meaningful balance between these two conflicting properties. Many of the commonly used supervised classifiers, such as the multilayer perceptron (MLP), radial basis function networks, and probabilistic neural networks, are very stable classifiers, unable to learn new information. The practical approach generally taken with these classifiers for incremental learning is to discard the previously trained classifier, and to combine and use the entire training data accumulated thus far to create a new classifier from scratch. This approach effectively causes the previously learned information to be entirely lost, a phenomenon known as catastrophic forgetting [2,3]. For the purpose of this work, we define an incremental learning algorithm as one that has: (1) the capability of learning novel information content from consecutive datasets without requiring access to previously used data; (2) the capability of retaining previously learned knowledge; and (3) the ability to learn new classes introduced by new datasets.

Learn++, based on weighted majority voting of an ensemble of classifiers, satisfies the above listed criteria for incremental learning, yet is resistant to the aforementioned drawbacks [4, 5, 6]. In essence, the idea is to generate an ensemble of classifiers with the initial data, and to generate additional classifiers as new datasets are acquired. We have recently noticed that the way in which the voting weights are assigned to classifiers, based on their performance during training, is suboptimal, because these weights are set during training and remain constant thereafter. A dynamic approach that assigns voting weights based on the estimated performance of each classifier on each given instance may be closer to optimal.

B. Ensemble of Classifiers and Weighted Majority Voting

Ensemble approaches have drawn much interest since Hansen and Salamon's seminal work [7]. In essence, a group of classifiers are trained using different distributions of training samples, and the outputs of these classifiers are then combined in some manner to obtain the final classification rule. Learn++ uses the synergistic power of such an ensemble for incremental learning of the novel content provided by consecutive datasets. The algorithm was inspired by Freund and Schapire's adaptive boosting (AdaBoost.M1) algorithm [8], which was originally proposed for improving the performance of weak classifiers. It is based on the weighted majority voting [9] of hypotheses that are generated by sequentially training a set of weak classifiers on different distributions of the training data.
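To make the combination rule concrete, the following minimal Python sketch (an illustration added here, not code from the paper) implements weighted majority voting with fixed, training-time weights, as used by AdaBoost.M1 and the original Learn++; the stub classifiers and weight values are hypothetical.

    import numpy as np

    def weighted_majority_vote(hypotheses, weights, x, num_classes):
        # Each hypothesis casts its full, static voting weight for the class it
        # predicts; the class accumulating the largest total vote wins.
        votes = np.zeros(num_classes)
        for h, w in zip(hypotheses, weights):
            votes[h(x)] += w
        return int(np.argmax(votes))

    # Toy usage: three hypothetical stub classifiers on a two-class problem.
    hs = [lambda x: 0, lambda x: 1, lambda x: 1]
    print(weighted_majority_vote(hs, weights=[0.8, 0.5, 0.5], x=None, num_classes=2))  # -> 1

Note that here the weights are set once, after training; the central question of this paper is what happens when they are instead allowed to depend on x.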

Using weak classifiers allows different classifiers to make different errors; combining them through weighted majority voting then effectively averages out the individual errors, resulting in a stronger classifier with much improved generalization performance. The original version of Learn++ followed the AdaBoost approach in determining voting weights, which were assigned during training depending on each classifier's performance on its own training data. While this approach makes perfect sense when the entire data come from the same database, it has a handicap in an incremental learning setting: since each classifier is trained to recognize (slightly) different portions of the feature space, classifiers performing well on the region represented by their training data may not do so well when classifying instances coming from different regions of the space. Therefore, assigning voting weights primarily on the training performance of each classifier is suboptimal. Estimating the potential performance of a classifier on a test instance using a statistical distance metric, and assigning voting weights based on these estimates, is a preferable alternative.

In this paper, we present a modified version of Learn++ along with simulation results, compared against the original Learn++, to show the improvement in generalization performance and stability in incrementally learning new information content. Also new in this study, we evaluate AdaBoost for incremental learning and compare it to both versions of Learn++. We show that, while not originally intended for such applications, AdaBoost is capable of incremental learning, albeit with lower performance, efficiency and stability than either version of Learn++. Overviews of other ensemble and classifier combination techniques can be found in [10-15] and the references therein.

II. LEARN++ WITH DYNAMICALLY UPDATED VOTING WEIGHTS

Learn++, similar to AdaBoost, generates an ensemble of weak classifiers by choosing a subset of the training data from the database using an iteratively updated weight distribution rule (not to be confused with the voting weights). Learn++, however, combines all classifiers at each iteration to obtain a composite hypothesis before updating the distribution. The distribution update rule of Learn++ is then based on the performance of this composite hypothesis, which represents the performance of the entire ensemble generated thus far. The distribution weights of those instances that are correctly identified by the ensemble are reduced. This distribution update rule is designed specifically to accommodate incremental learning of additional datasets, especially those that introduce previously unseen classes. The pseudocode of the algorithm is given in Fig. 1.

For each database $D_k$, $k=1,\dots,K$ that becomes available, the inputs to Learn++ are: (1) labeled training data $S_k=\{(x_i, y_i),\ i=1,\dots,m_k\}$, where $x_i$ is the $i$-th training instance and $y_i$ its correct label; (2) a weak learning algorithm, BaseClassifier; and (3) an integer $T_k$, the total number of weak classifiers to be generated. BaseClassifier can be any supervised algorithm that can obtain a minimum of 50% classification performance on its training data, ensuring that the classifier is relatively weak, yet strong enough to have a meaningful performance. Using a weak classifier has the additional advantage of rapid learning, since the time-consuming fine-tuning step, which could potentially cause overfitting, is avoided. Unless there is compelling reason to choose otherwise, the distribution weights are initialized to be uniform, so that all instances have the same probability of being selected into the first training subset. If $k>1$ (that is, a new database has been introduced), a distribution initialization sequence (the If block in Fig. 1) reinitializes the data distribution based on the performance of the current ensemble on the new data. At each iteration $t$, the distribution $D_t$ is obtained by normalizing the weights $w_t$ of the instances (step 1):

$D_t = w_t \big/ \sum_{i=1}^{m_k} w_t(i)$    (1)

The training dataset $S_k$ is divided into a training subset $TR_t$ and a testing subset $TE_t$ according to $D_t$ (step 2). Learn++ then calls BaseClassifier (step 3) and trains it with $TR_t$ to generate a weak hypothesis $h_t$. The error of this hypothesis is calculated on the current training data $S_k$ by adding the distribution weights of the misclassified instances (step 4):

$\varepsilon_t = \sum_{i:\,h_t(x_i)\neq y_i} D_t(i) = \sum_i D_t(i)\,[\![h_t(x_i)\neq y_i]\!]$    (2)

where $[\![\cdot]\!]$ is 1 if the predicate is true, and 0 otherwise. If $\varepsilon_t > 1/2$, the current $h_t$ is deemed too weak, and it is replaced with a new $h_t$ generated from a fresh pair of $TR_t$ and $TE_t$. If $\varepsilon_t < 1/2$, the current hypothesis is added to the previous ones, and all hypotheses generated during the previous iterations are combined using weighted majority voting to construct the composite hypothesis $H_t$ (step 5).

In the original Learn++, the voting weights were calculated from the error $\varepsilon_t$, so that hypotheses with lower error were given higher weights, and the classes predicted by these hypotheses were weighted more heavily. Since the hypothesis weights are assigned prior to testing, based on individual training performance, this weight assignment is suboptimal: hypotheses are trained with different (and possibly overlapping) portions of the feature space, and it may not be reasonable to expect a classifier to perform well on test instances that come from different portions of the feature space. This is not likely to be a major issue when only a single database is used (as in AdaBoost); however, it is a valid concern in an incremental learning setting. A better rule is to dynamically estimate which hypotheses are more likely to correctly classify any given instance, and give those hypotheses higher voting weights. Therefore, we modify the expression for the composite hypothesis, representing the ensemble decision, as

$H_t(x) = \arg\max_y \sum_{t:\,h_t(x)=y} DW_t(x)$    (3)

where $DW_t(x)$ is the dynamic weight assigned to hypothesis $h_t$ for the instance $x$. As described below, the dynamic weights are determined by a Mahalanobis-distance based estimate of the likelihood that $h_t$ will correctly classify the instance $x$.
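The change from static to dynamic voting is small in code but central to the algorithm. The sketch below mirrors Eq. (3) under the assumption of a dynamic_weight(t, x) callable returning $DW_t(x)$ for hypothesis $h_t$; one concrete choice of this callable, based on the class-specific Mahalanobis distance, is formalized in Eqs. (8)-(10) below.

    import numpy as np

    def dwmv(hypotheses, dynamic_weight, x, num_classes):
        # Composite hypothesis of Eq. (3): the vote of h_t is weighted by DW_t(x),
        # re-evaluated for the specific instance x rather than fixed at training time.
        votes = np.zeros(num_classes)
        for t, h in enumerate(hypotheses):
            votes[h(x)] += dynamic_weight(t, x)
        return int(np.argmax(votes))

Compared with the static vote shown earlier, the only difference is that each hypothesis's weight is a function of the test instance.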

Input: For each dataset drawn from $D_k$, $k=1,2,\dots,K$:
  - Sequence of $m_k$ examples $S_k=\{(x_i, y_i),\ i=1,\dots,m_k\}$
  - Weak learning algorithm BaseClassifier
  - Integer $T_k$, specifying the number of iterations

Do for each $k=1,2,\dots,K$:

  Initialize $w_1(i) = D_1(i) = 1/m_k$, $\forall i$, $i=1,2,\dots,m_k$
  If $k>1$: go to Step 5, evaluate the current ensemble on the new dataset $D_k$, and update the weight distribution. End If

  Do for $t=1,2,\dots,T_k$:
  1. Set $D_t = w_t \big/ \sum_{i=1}^{m_k} w_t(i)$ so that $D_t$ is a distribution.
  2. Draw training $TR_t$ and testing $TE_t$ subsets from $D_t$.
  3. Call BaseClassifier to be trained with $TR_t$.
  4. Obtain a hypothesis $h_t$ and calculate its error $\varepsilon_t = \sum_{i:\,h_t(x_i)\neq y_i} D_t(i)$ on $S_k$.
     If $\varepsilon_t > 1/2$, discard $h_t$ and go to step 2.
  5. Call dynamically weighted majority voting (DWMV) to obtain the composite hypothesis
     $H_t = \arg\max_y \sum_{t:\,h_t(x)=y} DW_t(x)$
  6. Compute the error of the composite hypothesis
     $E_t = \sum_{i:\,H_t(x_i)\neq y_i} D_t(i)$
     If $E_t > 1/2$, discard $H_t$ and go to step 2.
  7. Set $B_t = E_t/(1-E_t)$, and update the weights:
     $w_{t+1}(i) = w_t(i) \times B_t$ if $H_t(x_i) = y_i$, and $w_{t+1}(i) = w_t(i)$ otherwise.
  End

Call DWMV and output the final hypothesis:
  $H_{final}(x) = \arg\max_y \sum_{k=1}^{K} \sum_{t:\,h_t(x)=y} DW_t(x)$

Fig. 1. Pseudocode for the modified Learn++ algorithm

The composite error $E_t$ made by $H_t$, that is, the error of the entire ensemble constructed thus far, is determined by summing the distribution weights of all instances misclassified by the ensemble (step 6):

$E_t = \sum_{i:\,H_t(x_i)\neq y_i} D_t(i) = \sum_i D_t(i)\,[\![H_t(x_i)\neq y_i]\!]$    (4)

Finally, the composite normalized error is determined as

$B_t = E_t/(1-E_t)$, with $0 < E_t < 1/2$ and $0 < B_t < 1$,    (5)

and the distribution weights are updated according to the ensemble performance (step 7):

$w_{t+1}(i) = w_t(i) \times \begin{cases} B_t, & \text{if } H_t(x_i)=y_i \\ 1, & \text{otherwise} \end{cases} = w_t(i)\,B_t^{\,1-[\![H_t(x_i)\neq y_i]\!]}$    (6)

This expression reduces the weights of the instances correctly classified by the composite hypothesis $H_t$ by a factor of $B_t$, while the weights of the misclassified instances are kept unchanged. At the $(t+1)$-st iteration, after the normalization of the weights in step 1, the probability of choosing previously correctly classified instances for $TR_{t+1}$ is reduced, while that of the misclassified instances is effectively increased.

This is a logical place to pause and point out some of the main differences between AdaBoost and Learn++. The distribution update rule in AdaBoost is based on the performance of the previous hypothesis $h_t$ [8], which focuses the algorithm on difficult instances with respect to different samplings of a given single database, whereas that of Learn++ is based on the performance of the entire ensemble [4], which focuses the algorithm on instances that carry novel information with respect to consecutive databases. This becomes particularly critical when a new database introduces instances from a previously unseen class. Since none of the previous classifiers in the ensemble has seen instances from the new class, $H_t$ initially misclassifies them, forcing the algorithm to focus on these instances that carry the novel information. The procedure would not work nearly as efficiently, however, if the weight update rule were based on the performance of $h_t$ only (as in AdaBoost) instead of $H_t$. This is because the training performance of the first $h_t$ on instances from the new class is independent of that of the previously generated classifiers. Therefore, $h_t$ is likely to correctly classify the new class instances it has just seen, but only at the time they are first introduced. This would cause the algorithm to focus on other difficult-to-learn instances, such as outliers, rather than the instances with novel information content.

Once $T_k$ hypotheses are generated for each database $D_k$, the final hypothesis $H_{final}$ is obtained by combining all hypotheses through dynamically weighted majority voting, choosing the class that receives the highest total vote among all hypotheses:

$H_{final}(x) = \arg\max_y \sum_{k=1}^{K} \sum_{t:\,h_t(x)=y} DW_t(x)$    (7)

The intuition in using dynamically updated voting weights is as follows: if we knew which hypotheses would perform best ahead of time, we would give those hypotheses higher weights. We cannot have this information a priori; however, we can estimate which classifiers are more likely to correctly identify a given instance, based on the location of that instance in the feature space with respect to the instances used to train the individual classifiers. If an instance is spatially close, in a distance metric sense, to the training data used to train a classifier, then it is reasonable to expect that this classifier will perform well on the given instance. We use the class-specific Mahalanobis distance metric to compute the distance between the training data and the unknown instance for each classifier. Classifiers whose training datasets are closer to the unknown instance are weighted higher.
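Reading Fig. 1 as code, one training session of the modified algorithm can be sketched as follows. This is an illustrative interpretation, not the authors' implementation: X and y are assumed to be numpy arrays, train_base (fits a weak learner on a subset and returns a callable hypothesis) and dynamic_weight (returns $DW_t(x)$) are assumed interfaces, and the $TR_t$ size of m/2 is an arbitrary choice of this sketch.

    import numpy as np

    def learnpp_session(X, y, train_base, dynamic_weight, T, num_classes, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        m = len(y)
        w = np.ones(m) / m                      # uniform initial weights (k = 1 case)
        hypotheses = []

        def composite(x):                       # H_t by DWMV, Eq. (3)
            votes = np.zeros(num_classes)
            for t, h in enumerate(hypotheses):
                votes[h(x)] += dynamic_weight(t, x)
            return int(np.argmax(votes))

        while len(hypotheses) < T:
            D = w / w.sum()                                      # step 1, Eq. (1)
            idx = rng.choice(m, size=m // 2, p=D)                # step 2: draw TR_t from D_t
            h = train_base(X[idx], y[idx])                       # step 3
            miss_h = np.array([h(xi) != yi for xi, yi in zip(X, y)])
            if D[miss_h].sum() > 0.5:                            # step 4, Eq. (2)
                continue                                         # h_t too weak: redraw, retrain
            hypotheses.append(h)                                 # step 5: H_t now includes h_t
            miss_H = np.array([composite(xi) != yi for xi, yi in zip(X, y)])
            E = D[miss_H].sum()                                  # step 6, Eq. (4)
            if E > 0.5:
                hypotheses.pop()                                 # discard H_t and try again
                continue
            B = max(E, 1e-10) / (1.0 - E)                        # Eq. (5); floor keeps w nonzero
            w[~miss_H] *= B                                      # step 7, Eq. (6)
        return hypotheses

For a later session (k > 1), per the If block of Fig. 1, the weights would first be reinitialized by evaluating the existing ensemble on the new dataset before this loop resumes adding hypotheses.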

We note that previously seen data need not be stored in order to compute the desired distances; only the means and covariance matrices of the training sets are required. We formalize the computation of these weights as follows. Let us define $TR_t^c$ as the subset of $TR_t$, the training dataset used during the $t$-th iteration, that includes only those instances belonging to class $c$, that is,

$TR_t^c = \{x_i \mid x_i \in TR_t \ \wedge\ y_i = c\}$, $\quad TR_t = \bigcup_{c=1}^{C} TR_t^c$    (8)

where $C$ is the total number of classes. The class-specific Mahalanobis distance is then computed as

$M_t^c(x) = (x - m_t^c)^T (C_t^c)^{-1} (x - m_t^c)$, $\quad c=1,\dots,C$    (9)

where $m_t^c$ is the mean and $C_t^c$ is the covariance matrix of $TR_t^c$. For any instance $x$, the Mahalanobis distance based dynamic weight of the $t$-th hypothesis is then computed as

$DW_t(x) = 1 \big/ \min_c M_t^c(x)$, $\quad c=1,\dots,C;\ t=1,\dots,T$    (10)

where $T$ is the total number of hypotheses generated. The Mahalanobis distance implicitly assumes that the underlying data distribution is Gaussian, which in general is not the case. Yet it is more informative than other distance metrics, as it takes the data covariance into consideration, and it provides promising results demonstrating its effectiveness.
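A sketch of Eqs. (8)-(10) in the same vein; numpy arrays are assumed, and the pseudo-inverse and the zero-distance floor are numerical safeguards added here, not part of the paper's formulation.

    import numpy as np

    def fit_class_stats(X, y, num_classes):
        # Per-class mean and inverse covariance of TR_t (Eq. 8); only these
        # statistics, not the training data themselves, need to be stored.
        stats = []
        for c in range(num_classes):
            Xc = X[y == c]
            if len(Xc) < 2:
                continue                  # class absent from (or too small in) this TR_t
            mean = Xc.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(Xc, rowvar=False))  # pinv guards singular C_c
            stats.append((mean, cov_inv))
        return stats

    def mahalanobis_dw(stats, x):
        # Eq. (9): class-specific Mahalanobis distance of x to each class of TR_t;
        # Eq. (10): DW_t(x) is the reciprocal of the smallest such distance.
        dists = [float((x - m) @ C_inv @ (x - m)) for m, C_inv in stats]
        return 1.0 / max(min(dists), 1e-12)   # floor avoids division by zero

Per hypothesis $h_t$, fit_class_stats would be computed once on $TR_t$ at training time; mahalanobis_dw(stats_t, x) then serves as the dynamic_weight(t, x) used in the voting sketches above.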
III. SIMULATION RESULTS

In this paper, we present simulation results of Learn++ with the dynamic voting weight update, along with the original Learn++ and AdaBoost.M1, on one real world and two benchmark datasets. All results are given as 95% confidence intervals obtained through 10-fold cross validation. To simulate incremental learning, the training is done in sessions, where only the most recently available database is shown to the algorithm during the current training session (TS).

A. Volatile Organic Compounds (VOC) Database

This database was generated from the responses of six quartz crystal microbalances (QCMs) to various concentrations of five volatile organic compounds: ethanol (ET), octane (OC), toluene (TL), xylene (XL) and trichloroethylene (TCE). The database was partitioned into four sets: S1~S3 for training, where each set introduces one new class, and TEST for validation. The data distribution is shown in Table 1. The base classifier used for all three algorithms was a single-hidden-layer MLP with just enough hidden nodes and a rather tolerant error goal to make it a reasonably weak classifier for this database. Tables 2, 3 and 4 list the percent training and generalization performances of Learn++ with dynamically updated voting weights (DUVW), the original Learn++ and AdaBoost.M1, respectively, on the VOC data after each training session, TS1~TS3.

TABLE 1. DATA DISTRIBUTION FOR THE VOC DATABASE
Dataset   ET    OC    TL    TCE   XL
S1        …     …     …     …     …
S2        …     …     …     …     …
S3        …     …     …     …     …
TEST      …     …     …     …     …

TABLE 2. DUVW LEARN++ PERFORMANCE ON VOC DATA
Dataset   TS1        TS2        TS3
S1        99.37~…    …~…        …~87.47
S2        -          …~…        …~90.4
S3        -          -          …~94.78
TEST      59.7~…     …~…        …~88.60

TABLE 3. ORIGINAL LEARN++ PERFORMANCE ON VOC DATA
Dataset   TS1        TS2        TS3
S1        99.9~…     …~…        …~81.93
S2        -          …~…        …~93.49
S3        -          -          …~95.68
TEST      61.70~…    …~…        …~88.46

TABLE 4. ADABOOST.M1 PERFORMANCE ON VOC DATA
Dataset   TS1        TS2        TS3
S1        …~…        …~…        …~77.88
S2        -          …~…        …~84.30
S3        -          -          …~94.87
TEST      61.2~…     …~…        …~82.58

While all algorithms achieved incremental learning, Learn++ with dynamically updated voting weights performed best, slightly better than the original version of Learn++, and significantly better than AdaBoost.M1. It is also worth noting that the confidence interval of the modified Learn++ was narrower than that of its predecessor, indicating less variability and increased stability in the performance of the modified algorithm.

Tables 2~4 also show some decline in training performance over the three training sessions (on datasets S1~S3). This is expected due to the stability-plasticity dilemma. We note, however, that the loss of previously acquired knowledge, as measured by the performance on the training data, is much smaller in the modified Learn++ than in the others.

B. Wisconsin Breast Cancer (BC) Database

This database, originally created at The University of Wisconsin, Madison [16], was obtained from the UCI repository [17]. The database consists of nine features and a total of 683 instances from two classes of breast tumors: benign and malignant. The data distribution is shown in Table 5. The base classifier was again an MLP type neural network with similar characteristics as described earlier. Tables 6~8 present the 10-fold cross validation percent training and generalization performances.

TABLE 5. DATA DISTRIBUTION FOR BC DATA
Dataset   Benign   Malignant
S1        …        …
S2        …        …
TEST      …        …

TABLE 6. DUVW LEARN++ PERFORMANCE ON BC DATA
Dataset   TS1        TS2
S1        93.97~…    …~96.8
S2        -          …~95.76
TEST      94.82~…    …~98.4

TABLE 7. ORIGINAL LEARN++ PERFORMANCE ON BC DATA
Dataset   TS1        TS2
S1        94.7~…     …~96.44
S2        -          …~96.08
TEST      96.34~…    …~98.7

TABLE 8. ADABOOST.M1 PERFORMANCE ON BC DATA
Dataset   TS1        TS2
S1        94.72~…    …~97.57
S2        -          …~95.63
TEST      94.95~…    …~98.5

The training and generalization performances in Tables 6~8 indicate that all three algorithms do equally well in learning additional information if no new classes are introduced. In this application, all algorithms acquired most of their knowledge from S1 during the first training session; however, they were still able to extract an incremental amount of new knowledge from the second dataset, S2. The performance differences between the modified Learn++, the original Learn++ and AdaBoost.M1 become less significant under such scenarios, where no new classes are introduced, or no substantial novel content is provided with the new database. This is expected, as the main difference between the original Learn++ and AdaBoost.M1 is the distribution update rule, which is geared towards learning new classes. Similarly, the modification with the dynamic voting weights becomes more meaningful when different datasets cover substantially different portions of the feature space, which happens more drastically when either a new class is introduced, or the new instances carry a substantial amount of novel information content.

C. Vehicle Silhouettes Database

The Vehicle database was also obtained from the UCI repository [17]. This database consists of 18 features in 946 instances from four vehicle classes. It is known to be a challenging database, as typical performances of various algorithms on it have reportedly been around 65~75% in non-incremental learning [17]. The Vehicle database was divided into three training datasets, S1~S3, and one test dataset, TEST. The data distribution, shown in Table 9, was specifically biased towards new classes. Tables 10~12 summarize the 10-fold cross validation percent training and generalization performances of Learn++ with the dynamic voting weight update, the original Learn++ and AdaBoost.M1, respectively, after each training session, TS1~TS3. The base classifier was again a single-hidden-layer MLP type neural network, with similar characteristics as described above.

TABLE 9. DATA DISTRIBUTION FOR THE VEHICLE DATABASE
Dataset   Opel   Saab   Bus   Van
S1        …      …      …     …
S2        …      …      …     …
S3        …      …      …     …
TEST      …      …      …     …

TABLE 10. DUVW LEARN++ PERFORMANCE ON VEHICLE
Dataset   TS1        TS2        TS3
S1        88.66~…    …~…        …~79.68
S2        -          …~…        …~73.66
S3        -          -          …~87.43
TEST      47.00~…    …~…        …~75.46

TABLE 11. ORIGINAL LEARN++ PERFORMANCE ON VEHICLE
Dataset   TS1        TS2        TS3
S1        89.60~…    …~…        …~76.78
S2        -          …~…        …~64.03
S3        -          -          …~86.92
TEST      47.8~…     …~…        …~73.20

TABLE 12. ADABOOST.M1 PERFORMANCE ON VEHICLE
Dataset   TS1        TS2        TS3
S1        67.20~…    …~…        …~83.2
S2        -          …~…        …~52.88
S3        -          -          …~80.90
TEST      35.37~…    …~…        …~63.48

The generalization (TEST) performances in Tables 10~12 indicate that the modified Learn++ outperformed the other two algorithms, both in performance and in the confidence interval of the performance. Based on 10-fold cross validation, the generalization performance of the modified Learn++ was in the 72~75% range, compared to 68~73% for the original Learn++ and 52~63% for AdaBoost.M1. Furthermore, the modified Learn++ places itself much more favorably along the plasticity-stability spectrum, as it was able to retain significantly more of its previously acquired knowledge than the other algorithms.

IV. DISCUSSION AND CONCLUSIONS

In this paper, we presented a modified approach to the weighted majority voting rule, where the classifiers are weighted dynamically for each instance, depending upon the estimated likelihood of each hypothesis to correctly classify the unknown instance. The intuitive idea behind this approach is that the classifier whose training dataset is closest to the given instance has the most information about that particular instance, and is therefore the most likely to classify it correctly. Simulation results indicate that all three algorithms are capable of incremental learning; however, the results were most favorable and promising for the modified Learn++ using dynamically updated voting weights. We note that the generalization performances obtained by the modified Learn++ during incremental learning were very similar to, if not better than, the generalization performances obtained by several other algorithms on these datasets when used in a non-incremental learning setting, as reported in [16]. Learning in a non-incremental setting allows the entire data to be made available to the algorithm at once, which is a much simpler problem.

The modified Learn++ algorithm exhibited not only better generalization performance, but also a significantly narrower confidence interval. The improved confidence interval is in fact worth attention, because a narrower confidence interval indicates improved stability and robustness, qualities of considerable concern in incremental learning. In particular, improved generalization performance coupled with a narrower confidence interval is a satisfying outcome, since this combination places the modified Learn++ very favorably on the stability-plasticity spectrum.

We also ran all algorithms multiple times under several other scenarios, such as changing the order in which the training data are presented, and changing the base classifier training parameters (such as the number of hidden layer nodes, the error goal, etc.). We found that all algorithms are robust to the order in which the datasets are presented, as well as to reasonable modifications of the training parameters. Also, none of the algorithms suffers from catastrophic forgetting, since previously generated classifiers are retained. Some loss of information is inevitable, due to the stability-plasticity dilemma, while new information is being learned. However, this loss of previously acquired knowledge was very marginal with the modified Learn++, but most prominent with AdaBoost, when the data introduced a significant amount of novel information content, such as a new class.

We conclude by restating that in applications where the additional information content is minimal, the performance differences between the algorithms become less significant. The promising results of the modified Learn++ with dynamically updated voting weights are most meaningful and beneficial when the algorithm is used under the scenarios for which it is specifically designed, that is, when the additional data provide significant novel information content.

ACKNOWLEDGMENT

This material is based upon work supported by the National Science Foundation under Grant No. ECS-0239090, "CAREER: An Ensemble of Classifiers Approach for Incremental Learning."

REFERENCES

[1] S. Grossberg, "Nonlinear neural networks: principles, mechanisms and architectures," Neural Networks, vol. 1, no. 1, pp. 17-61, 1988.
[2] M. McCloskey and N. Cohen, "Catastrophic interference in connectionist networks: the sequential learning problem," in The Psychology of Learning and Motivation, G. H. Bower, ed., vol. 24, pp. 109-165, San Diego, CA: Academic Press, 1989.
[3] R. French, "Catastrophic forgetting in connectionist networks," Trends in Cognitive Sciences, vol. 3, no. 4, pp. 128-135, 1999.
[4] R. Polikar, L. Udpa, S. Udpa, and V. Honavar, "Learn++: An incremental learning algorithm for supervised neural networks," IEEE Trans. on Systems, Man and Cybernetics (C), vol. 31, no. 4, pp. 497-508, 2001.
[5] R. Polikar, J. Byorick, S. Krause, A. Marino, and M. Moreton, "Learn++: A classifier independent incremental learning algorithm for supervised neural networks," Proc. of Int. Joint Conf. on Neural Networks, vol. 2, Honolulu, HI, 2002.
[6] R. Polikar, L. Udpa, S. Udpa, and V. Honavar, "An incremental learning algorithm with confidence estimation for automated identification of NDE signals," IEEE Trans. on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 51, no. 8, 2004.
[7] L. Hansen and P. Salamon, "Neural network ensembles," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. 993-1001, 1990.
[8] Y. Freund and R. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119-139, 1997.
[9] N. Littlestone and M. Warmuth, "The weighted majority algorithm," Information and Computation, vol. 108, no. 2, pp. 212-261, 1994.
[10] M. I. Jordan and R. A. Jacobs, "Hierarchical mixtures of experts and the EM algorithm," Neural Computation, vol. 6, no. 2, pp. 181-214, 1994.
[11] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, "On combining classifiers," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226-239, 1998.
[12] L. Breiman, "Combining predictors," in Combining Artificial Neural Nets, A. Sharkey, ed., pp. 31-50, NY: Springer, 1999.
[13] T. Dietterich, "Ensemble methods in machine learning," Proc. 1st Int. Workshop on Multiple Classifier Systems, LNCS, J. Kittler and F. Roli, eds., vol. 1857, pp. 1-15, NY: Springer, 2000.
[14] J. Ghosh, "Multiclassifier systems: back to the future," Proc. 3rd Int. Workshop on Multiple Classifier Systems, LNCS, J. Kittler and F. Roli, eds., vol. 2364, pp. 1-15, NY: Springer, 2002.
[15] T. Windeatt and F. Roli, eds., Proc. 4th Int. Workshop on Multiple Classifier Systems, LNCS, vol. 2709, NY: Springer, 2003.
[16] W. H. Wolberg and O. L. Mangasarian, "Multisurface method of pattern separation for medical diagnosis applied to breast cytology," Proc. of the National Academy of Sciences, vol. 87, pp. 9193-9196, 1990.
[17] C. L. Blake and C. J. Merz, UCI Repository of Machine Learning Databases, University of California, Irvine, CA.
