Ensemble of Classifiers Based Incremental Learning with Dynamic Voting Weight Update


Robi Polikar, Stefan Krause and Lyndsay Burd
Electrical and Computer Engineering, Rowan University, 136 Rowan Hall, Glassboro, NJ 08028, USA

Abstract: An incremental learning algorithm based on weighted majority voting of an ensemble of classifiers is introduced for supervised neural networks, where the voting weights are updated dynamically based on the current test input of unknown class. The algorithm's dynamic voting weight update feature is an enhancement to our previously introduced incremental learning algorithm, Learn++. The algorithm is capable of incrementally learning new information from additional datasets that may later become available, even when the new datasets include instances from additional classes that were not previously seen. Furthermore, the algorithm retains formerly acquired knowledge without requiring access to the datasets used earlier, attaining a delicate balance on the stability-plasticity dilemma. The algorithm creates additional ensembles of classifiers based on an iteratively updated distribution function on the training data that favors training with increasingly difficult-to-learn, previously not learned, and/or unseen instances. The final classification is made by weighted majority voting of all classifier outputs in the ensemble, where the voting weights are determined dynamically during actual testing, based on the estimated performance of each classifier on the current test data instance. We present the algorithm in its entirety, as well as its promising simulation results on two real world applications.

I. INTRODUCTION

A. Incremental Learning

As most researchers in machine learning are painfully aware, the generalization performance of any learning algorithm is acutely contingent upon the availability of an adequate and representative training dataset. Oftentimes, however, acquisition of such data is tedious, time consuming and expensive. Therefore, it is not unusual for such data to become available in batches over a period of time. Furthermore, it is also not unusual for data belonging to different classes to be acquired in separate data acquisition episodes. Depending on the exact nature of the application, waiting for the entire data to become available can be ineffective, financially uneconomical, technically improvident, or even unfeasible, particularly if the exact nature of future data is unknown. Under such scenarios, it would be more effective to be able to start training a classifier with the existing data, and then incrementally update this classifier to accommodate new data without, of course, compromising classification performance on previously seen data. Since most of the commonly used classifiers, including the ubiquitous multilayer perceptron (MLP) and the radial basis function (RBF) networks, are unable to accommodate such incremental learning, the practical approach has traditionally been discarding the existing classifier and starting from scratch by combining all data accumulated thus far every time a new dataset becomes available. This approach results in the loss of all previously learned knowledge, a phenomenon known as catastrophic forgetting. Furthermore, the combination of old and new datasets is not even always possible if previous datasets are lost, discarded, corrupted, inaccessible, or otherwise unavailable. Incremental learning of new information without forgetting what is previously learned raises the so-called stability-plasticity dilemma [1]: some information may have to be lost to learn new information, as learning new patterns will tend to overwrite formerly acquired knowledge. Thus, a stable classifier can preserve existing knowledge but cannot accommodate new information, whereas a plastic classifier can learn new information but cannot retain prior knowledge. The issue at hand is then if, when and how much of one should be sacrificed for the other to achieve a meaningful balance. Various definitions and interpretations of incremental learning can be found in the literature.
A representative, yet certainly not exhaustive, list of references can be found in [2]. For the purposes of this work, an algorithm possesses incremental learning capabilities if it meets the following criteria: (1) ability to acquire additional knowledge when new datasets are introduced, without requiring access to previously seen data; (2) ability to retain a meaningful portion of the previously learned information; and (3) ability to learn new classes if introduced by new data. The algorithms referenced in [2], as well as many others, are all capable of learning new information; however, they satisfy the above-mentioned criteria only to varying degrees: they either require access to previously seen data, forget a substantial amount of prior knowledge along the way, or cannot accommodate new classes. One prominent exception is the (fuzzy) ARTMAP algorithm [3], along with its many recent variations. However, it has long been known that ARTMAP is very sensitive to the selection of its vigilance parameter, to noise levels in the training data, and to the order in which the training data are presented to the algorithm. Furthermore, the algorithm often suffers from overfitting problems if the vigilance parameter is not chosen appropriately. Various approaches have been and are being suggested to overcome these difficulties [4, 5, 6, 7, 8]. Recently we have suggested Learn++ as an alternate approach to the growing list of incremental learning algorithms [2, 9]. Learn++, based on the weighted majority voting of an ensemble of classifiers, satisfies the above-mentioned criteria. However, the weights for the majority voting are set during training and remain constant (hence static weights) after the training.

In this paper, we present a modified version of the algorithm, where the voting weights are updated dynamically by estimating which of the classifiers are likely to correctly classify any given test instance.

B. Ensemble of Classifiers

Learn++ takes advantage of the synergistic power of an ensemble of classifiers in learning a concept using the divide and conquer approach. The algorithm is in part inspired by the AdaBoost (adaptive boosting) algorithm [10], originally developed to improve the classification performance of weak classifiers. In essence, an ensemble of weak classifiers is trained using different distributions of training samples, and their outputs are then combined using the weighted majority-voting scheme [11] to obtain the final classification rule. The approach exploits the so-called instability of the weak classifiers, which allows the classifiers to construct sufficiently different decision boundaries for minor modifications of their training datasets, causing each classifier to make different errors on any given instance. A strategic combination of these classifiers then eliminates the individual errors, generating a strong classifier. Using an ensemble of classifiers has been well researched for improving classifier accuracy [12, 13, 14, 15, 16]; however, its potential for addressing the incremental learning problem has been mostly unexplored. Learn++ was developed in response to recognizing the potential feasibility of ensembles of classifiers in solving the incremental learning problem. Learn++ was first introduced in [17, 18] as an incremental learning algorithm for MLP networks. More recently we showed that Learn++ is actually quite versatile, as it works with any supervised classification algorithm [9]. In its original form, Learn++ combines the classifiers using weighted majority voting, where the voting weights are determined by the individual performances of the classifiers on their own training data. We realize that this is suboptimal, as a classifier that performs well on its own training data need not perform equally well on data coming from a different portion of the input space. In this paper, we first introduce the modified Learn++ algorithm, which uses a statistical-distance-metric based procedure for estimating which classifiers are likely to correctly classify a given unknown instance. Higher weights are then assigned to those classifiers estimated to perform well on the given instance.

II. LEARN++

Learn++ generates a set of classifiers (hypotheses) and combines them through weighted majority voting of the classes predicted by the individual hypotheses. The hypotheses are generated by training a weak classifier using instances drawn from iteratively updated distributions of the training database. The distribution update rule used by Learn++ is designed to accommodate additional datasets, in particular those that introduce previously unseen classes. Each classifier is trained using a subset of examples drawn from a weighted distribution that gives higher weights to examples misclassified by the previous ensemble. The pseudocode of the algorithm is provided in Fig. 1. For each database $D_k$, $k = 1, \ldots, K$ that becomes available to the algorithm, the inputs to Learn++ are (1) the labeled training data $S_k = \{(x_i, y_i) \mid i = 1, \ldots, m_k\}$, where $x_i$ and $y_i$ are the training instances and their correct classes, respectively; (2) a weak learning algorithm, BaseClassifier; and (3) an integer $T_k$, the maximum number of classifiers to be generated. For brevity, we drop the subscript $k$ from all other variables. BaseClassifier can be any supervised algorithm that achieves at least 50% correct classification on $S_k$ after being trained on a subset of $S_k$. This ensures that the classifier is sufficiently weak, yet strong enough to provide at least a meaningful classification performance. We also note that using weak classifiers has the additional advantages of rapid training and overfitting avoidance, since they only generate a gross approximation of the underlying decision boundary.
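As a concrete reading of the 50% requirement, the minimal check below (the helper name is ours, and a scikit-learn-style predict method is assumed; this is not from the paper) accepts a trained BaseClassifier only if it classifies at least half of $S_k$ correctly:

```python
import numpy as np

def meets_weak_criterion(clf, X_sk, y_sk):
    """A BaseClassifier trained on a subset of S_k qualifies only if it
    gets at least 50% of all of S_k right: weak, but still informative."""
    return np.mean(clf.predict(X_sk) == y_sk) >= 0.5
```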
At each iteration $t$, Learn++ first initializes a distribution $D_t$ by normalizing a set of weights $w_t$ assigned to the instances based on their individual classification by the current ensemble (step 1):

$$D_t = w_t \Big/ \sum_{i=1}^{m} w_t(i) \qquad (1)$$

Learn++ then divides $S_k$ into two mutually exclusive subsets by drawing a training subset $TR_t$ and a test subset $TE_t$ according to $D_t$ (step 2). Unless there is prior reason to choose otherwise, $D_1$ is initially set to be uniform, giving each instance an equal probability of being selected into $TR_1$. Learn++ then calls BaseClassifier to generate the hypothesis $h_t$ (step 3). The error of $h_t$ is computed on $S_k = TR_t \cup TE_t$ by adding the distribution weights of the misclassified instances (step 4):

$$\varepsilon_t = \sum_{i:\, h_t(x_i) \neq y_i} D_t(i) = \sum_{i=1}^{m} D_t(i)\, [\![ h_t(x_i) \neq y_i ]\!] \qquad (2)$$

where $[\![\cdot]\!]$ is 1 if the predicate is true, and 0 otherwise. If $\varepsilon_t > 1/2$, the current $h_t$ is discarded and a new $h_t$ is generated from a fresh pair of $TR_t$ and $TE_t$. If $\varepsilon_t < 1/2$, then the normalized error $\beta_t$ is computed as

$$\beta_t = \varepsilon_t / (1 - \varepsilon_t), \qquad 0 < \beta_t < 1. \qquad (3)$$

All hypotheses generated during the previous $t$ iterations are then combined using weighted majority voting (step 5) to construct the composite hypothesis $H_t$:

$$H_t = \arg\max_{y \in Y} \sum_{t:\, h_t(x) = y} \log(1/\beta_t) \qquad (4)$$

$H_t$ decides on the winning class, the class that receives the highest total vote. In the original Learn++ algorithm, the voting weights were determined based on the normalized errors $\beta_t$: hypotheses with lower normalized errors are given larger weights, so that the classes predicted by a hypothesis with a proven record are weighted more heavily. The log function is used to control the explosive effect of the very low $\beta_t$ values associated with classifiers that perform well during training.
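For concreteness, here is a minimal sketch of one such iteration and of the static vote of Eq. (4). It assumes numpy arrays and classifiers with a scikit-learn-style fit/predict interface; the half-sized training draw and all function names are our own illustration, not the paper's implementation:

```python
import numpy as np

def learnpp_iteration(X, y, w, make_base_classifier, rng):
    """One Learn++ iteration: normalize w into the distribution D_t (Eq. 1),
    draw TR_t according to D_t (step 2), train h_t on it (step 3), and
    measure the D_t-weighted error of h_t on all of S_k = TR_t u TE_t
    (Eq. 2). Returns (h_t, beta_t), or (None, None) if h_t must be redrawn."""
    D = w / w.sum()
    tr_idx = rng.choice(len(y), size=len(y) // 2, replace=True, p=D)
    h = make_base_classifier().fit(X[tr_idx], y[tr_idx])
    eps = D[h.predict(X) != y].sum()
    if eps > 0.5:
        return None, None                 # discard h_t; caller tries again
    return h, eps / (1.0 - eps)           # beta_t of Eq. (3)

def static_majority_vote(hypotheses, betas, x, classes):
    """Composite hypothesis of Eq. (4): each h_t casts a vote of weight
    log(1/beta_t) for its predicted class; the largest total wins."""
    votes = {c: 0.0 for c in classes}
    for h, beta in zip(hypotheses, betas):
        votes[h.predict(x.reshape(1, -1))[0]] += np.log(1.0 / beta)
    return max(votes, key=votes.get)
```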

Algorithm Learn++

Input: For each dataset $D_k$, $k = 1, 2, \ldots, K$:
- A sequence of $m_k$ examples $S_k = \{(x_i, y_i) \mid i = 1, \ldots, m_k\}$.
- A weak learning algorithm, BaseClassifier.
- An integer $T_k$, specifying the number of iterations.

Do for each $k = 1, 2, \ldots, K$:

Initialize $w_1(i) = D_1(i) = 1/m_k$, $\forall i = 1, 2, \ldots, m_k$.

Do for $t = 1, 2, \ldots, T_k$:

1. Set $D_t = w_t / \sum_{i=1}^{m_k} w_t(i)$ so that $D_t$ is a distribution.
2. Draw training $TR_t$ and testing $TE_t$ subsets from $D_t$.
3. Call BaseClassifier to be trained with $TR_t$.
4. Obtain a hypothesis $h_t : X \to Y$, and calculate the error of $h_t$, $\varepsilon_t = \sum_{i:\, h_t(x_i) \neq y_i} D_t(i)$, on $TR_t + TE_t$. If $\varepsilon_t > 1/2$, discard $h_t$ and go to step 2. Otherwise, compute the normalized error as $\beta_t = \varepsilon_t / (1 - \varepsilon_t)$.
5. Call dynamically weighted majority voting to obtain the composite hypothesis $H_t = \arg\max_{y \in Y} \sum_{t:\, h_t(x) = y} DW_t(x)$.
6. Compute the error of the composite hypothesis, $E_t = \sum_{i:\, H_t(x_i) \neq y_i} D_t(i)$.
7. Set $B_t = E_t / (1 - E_t)$, and update the weights: $w_{t+1}(i) = w_t(i) \times B_t$ if $H_t(x_i) = y_i$, and $w_{t+1}(i) = w_t(i)$ otherwise; that is, $w_{t+1}(i) = w_t(i) \times B_t^{[\![ H_t(x_i) = y_i ]\!]}$.

Call dynamically weighted majority voting and output the final hypothesis:

$$H_{final}(x) = \arg\max_{y \in Y} \sum_{k=1}^{K} \sum_{t:\, h_t(x) = y} DW_t(x)$$

Fig. 1. Algorithm Learn++.

While this rule makes intuitive sense, and in fact performed remarkably well in a number of simulations [9], it is nevertheless suboptimal. This is because the weights are determined and fixed prior to testing, based on the individual performances of the hypotheses on their own training data subsets. A rule that dynamically estimates which hypotheses are likely to correctly classify an unlabeled instance, so as to give higher voting weights to those hypotheses, would be preferable. We therefore change the composite hypothesis expression to

$$H_t = \arg\max_{y \in Y} \sum_{t:\, h_t(x) = y} DW_t(x) \qquad (5)$$

where $DW_t(x)$ is the instance-specific dynamic weight assigned for instance $x$ to the hypothesis $h_t$. The dynamic weights are determined using a Mahalanobis-distance-based estimate of the likelihood of $h_t$ correctly classifying $x$, as described below. The composite error $E_t$ made by $H_t$ is then computed as the sum of the distribution weights of the instances misclassified by $H_t$ (step 6):

$$E_t = \sum_{i:\, H_t(x_i) \neq y_i} D_t(i) \qquad (6)$$

The composite normalized error is similarly computed as

$$B_t = E_t / (1 - E_t), \qquad 0 < B_t < 1. \qquad (7)$$

The weights $w_t(i)$ are then updated for computing the next distribution $D_{t+1}$, which in turn is used in selecting the next training and testing subsets, $TR_{t+1}$ and $TE_{t+1}$, respectively (step 7):

$$w_{t+1}(i) = w_t(i) \times \begin{cases} B_t, & \text{if } H_t(x_i) = y_i \\ 1, & \text{otherwise} \end{cases} \;=\; w_t(i) \times B_t^{[\![ H_t(x_i) = y_i ]\!]} \qquad (8)$$

This rule reduces the weights of those instances correctly classified by the composite hypothesis $H_t$ by a factor of $B_t$ (since $0 < B_t < 1$), whereas it leaves the weights of the misclassified instances unchanged. After normalization (in step 1 of iteration $t+1$), the probability of the correctly classified instances being chosen into $TR_{t+1}$ is reduced, while those of the misclassified ones are effectively increased. Therefore, the algorithm focuses on instances that are difficult to classify, or instances that have not yet been properly learned. This approach allows incremental learning by concentrating on newly introduced instances, particularly those coming from previously unseen classes, as these are precisely the instances that have not been learned yet. We emphasize the introduction of the composite hypothesis $H_t$ by Learn++, which, along with the weight update rule, uniquely allows Learn++ to learn new classes. It is empirically observed that the procedure fails to learn new classes if the weight update rule were instead based on the performance of $h_t$ only (as AdaBoost does). The weight update rule based on the composite hypothesis performance allows Learn++ to focus on those instances that have not been learned by the current ensemble, rather than by the previous hypothesis. After $T_k$ hypotheses are generated for each database $D_k$, the final hypothesis $H_{final}$ is obtained by combining all hypotheses that have been generated thus far using the dynamically weighted majority-voting rule, choosing the class that receives the highest total vote among all classifiers:

$$H_{final}(x) = \arg\max_{y \in Y} \sum_{k=1}^{K} \sum_{t:\, h_t(x) = y} DW_t(x) \qquad (9)$$
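The composite error and weight update of Eqs. (6) through (8), and the final dynamically weighted vote of Eq. (9), reduce to a few lines of array arithmetic. The sketch below is our own minimal rendering under the same numpy/scikit-learn-style assumptions as before, not the authors' implementation:

```python
import numpy as np

def composite_weight_update(w, D, H_pred, y):
    """Steps 6-7 of Fig. 1: compute the composite error E_t (Eq. 6) and
    normalized error B_t (Eq. 7), then shrink the weights of instances the
    composite hypothesis H_t already classifies correctly (Eq. 8)."""
    miss = H_pred != y
    E = D[miss].sum()
    B = E / (1.0 - E)
    return np.where(miss, w, w * B)

def dynamic_final_vote(hypotheses, dynamic_weights, x, classes):
    """Final hypothesis of Eq. (9): every h_t generated over all training
    sessions votes for its predicted class with the instance-specific
    weight DW_t(x), supplied here as one callable per hypothesis."""
    votes = {c: 0.0 for c in classes}
    for h, dw in zip(hypotheses, dynamic_weights):
        votes[h.predict(x.reshape(1, -1))[0]] += dw(x)
    return max(votes, key=votes.get)
```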

To determine the dynamic voting weights, we use the Mahalanobis distance to compute the distance of the unknown instance to the datasets used to train the individual classifiers. Classifiers trained with datasets closer to the unknown instance are then given larger weights. Note that this approach does not require the training data to be saved, but only the mean and covariance matrices, which are typically much smaller than the original data. In Learn++, we first define $TR_{tc}$ as a subset of $TR_t$, the training data used during the $t$-th iteration, where $TR_{tc}$ includes only those instances of $TR_t$ that belong to class $c$, that is,

$$TR_{tc} = \{ x \mid x \in TR_t \;\&\; y = c \}, \qquad TR_t = \bigcup_{c=1}^{C} TR_{tc} \qquad (10)$$

where $C$ is the total number of classes. The class-specific Mahalanobis distance from an unknown instance $x$ to $TR_{tc}$ is then computed as

$$M_{tc}(x) = (x - m_{tc})^T C_{tc}^{-1} (x - m_{tc}), \qquad c = 1, 2, \ldots, C \qquad (11)$$

where $m_{tc}$ is the mean and $C_{tc}$ is the covariance matrix of $TR_{tc}$. For any instance $x$, the Mahalanobis-distance-based dynamic weight of the $t$-th hypothesis can then be obtained as

$$DW_t(x) = \frac{1}{\min_c \big( M_{tc}(x) \big)}, \qquad c = 1, 2, \ldots, C, \quad t = 1, \ldots, T \qquad (12)$$

where $T$ is the total number of hypotheses generated. The Mahalanobis distance metric implicitly assumes that the data is drawn from a Gaussian distribution, which in general is not the case. However, this metric provided promising results demonstrating its effectiveness. Other distance metrics that do not make this assumption will also be evaluated.
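A minimal rendering of Eqs. (10) through (12) follows. The class below (our own naming) stores only the per-class means and inverse covariances of $TR_t$, exactly the economy noted above; the small ridge term added to keep the covariance invertible is our assumption, not part of the paper. An instance of this class can serve directly as one of the per-hypothesis callables in the voting sketch above:

```python
import numpy as np

class MahalanobisWeight:
    """Instance-specific dynamic weight DW_t(x) of Eqs. (10)-(12), built
    from the training subset TR_t of a single hypothesis h_t. Assumes
    TR_t holds several instances of each class it contains."""

    def __init__(self, X_tr, y_tr, ridge=1e-6):
        self.stats = []
        for c in np.unique(y_tr):
            Xc = X_tr[y_tr == c]                       # TR_tc of Eq. (10)
            mean = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + ridge * np.eye(X_tr.shape[1])
            self.stats.append((mean, np.linalg.inv(cov)))

    def __call__(self, x):
        # class-specific Mahalanobis distances M_tc(x) of Eq. (11)
        dists = [(x - m) @ inv_cov @ (x - m) for m, inv_cov in self.stats]
        return 1.0 / min(dists)                        # DW_t(x), Eq. (12)
```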
III. LEARN++ SIMULATION RESULTS

The original Learn++ algorithm using the static weighted majority voting has been tested on a number of benchmark and other real world databases, whose results are provided in [2, 9]. In this paper, we present simulation results of using Learn++ with dynamically updated weighted majority voting on two real-world incremental learning problems that introduce new classes with additional data.

A. Ultrasonic Weld Inspection (UWI) Database

This rather challenging database was obtained by ultrasonic scanning of the welding regions of various stainless or carbon steel structures. The welding regions, known as heat-affected zones, are highly susceptible to growing a variety of defects, including potentially dangerous cracks. Discontinuities within the material, such as the air gaps due to cracks, cause the ultrasonic wave to be reflected back and received by the transducer. The reflected ultrasonic wave, also called the A-scan, serves as the signature pattern of the discontinuity, which is then analyzed to determine its type. This analysis, however, is hampered by the presence of other types of discontinuities, such as porosity (POR), slag and lack of fusion (LOF), all of which generate very similar A-scans, creating highly overlapping patterns in the feature space. Representative A-scans are shown in Fig. 2.

[Fig. 2. Representative normalized A-scans of (a) crack, (b) lack of fusion, (c) slag, (d) porosity.]

This four-class database was divided into three training datasets, S1 ~ S3, and a validation set, TEST. Table 1 presents the data distribution in each dataset. We note that each additional dataset introduced a new class: in particular, S1 had instances only from crack and LOF, S2 introduced slag instances, and S3 introduced porosity instances. The TEST dataset included instances from all classes. Only S_k was used during the k-th training session TS_k; therefore, previously used data were not made available to Learn++ in future training sessions. Table 2 summarizes the training and generalization performances of the algorithm after each training session. The classification performances on the training and testing datasets are shown in the rows labeled S1, S2, S3, and TEST, as obtained after each training session TS_k. The numbers in parentheses indicate the number of weak classifiers generated in each training session. Classifier generation continued until the generalization performance on the TEST dataset, shown in the last row, reached a steady saturation value. The weak learner used to generate the individual hypotheses was a single hidden layer MLP with 5 hidden layer nodes. The mean square error goals of all MLPs were preset to a value of 0.2 to prevent over-fitting and to ensure sufficiently weak learning. We note that any neural network can be turned into a weak learning algorithm by selecting its number of hidden layers and hidden layer nodes small, and its error goal high, with respect to the complexity of the problem.
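To reproduce this recipe with a standard library (an assumption about tooling; the paper trains its own MLPs against a mean-square-error goal), one could deliberately under-size and under-train a network, for example:

```python
from sklearn.neural_network import MLPClassifier

def make_weak_mlp(n_hidden=5, max_iter=50):
    """A deliberately weak MLP: one small hidden layer plus a low cap on
    training iterations play the role of the paper's high error goal.
    The specific numbers are illustrative, not the paper's settings."""
    return MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=max_iter)
```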

[Table 1. Data distribution for the UWI database: instance counts of LOF, SLAG, CRACK and POR in S1, S2, S3 and TEST.]

TABLE 2. LEARN++ PERFORMANCE ON THE UWI DATABASE

Dataset   TS1 (8)   TS2 (27)   TS3 (43)
S1        99.2%     89.2%      88.2%
S2        –         %          88.%
S3        –         –          %
TEST      57.%      7.5%       83.8%

Several observations can be made from Table 2: (i) the generalization performance on the TEST dataset steadily increases, indicating that Learn++ is indeed able to learn new information, as well as new classes, as they become available; (ii) in general, larger numbers of classifiers are required for incremental learning of new classes; (iii) there is an occasional decline in the training data performances, indicating some loss, albeit minor, of previously acquired knowledge as new information is acquired, which is expected due to the stability-plasticity dilemma; and (iv) the near 50% generalization performance after TS1 makes intuitive sense, since at that time the algorithm had only seen two of the four classes that appear in the TEST data. The performance gradually increases in proportion to the ratio of the classes seen, as they become available.

As a performance comparison, the same database was also used to train and test a single strong learner, a 49x4x2x4 two hidden layer MLP with an error goal of 0.001. The best test data classification performance of the strong learner has been around 75%, despite the fact that the strong learner was trained with instances from all classes.

B. Volatile Organic Compound (VOC) Database

This database was obtained from the responses of six quartz crystal microbalances (QCMs) to various concentrations of five volatile organic compounds (VOCs): ethanol (ET), xylene (XL), octane (OC), toluene (TL), and trichloroethylene (TCE). When QCMs are exposed to VOCs, the molecular mass deposited on their crystal surface alters their resonant frequency, which can be measured using a frequency counter or a network analyzer. By using an array of QCMs, each coated with a different polymer sensitive to specific VOCs, the collective response of the array can be used as a signature pattern of the VOC. However, QCMs have very limited selectivity, making the identification a challenging task. Representative patterns of the sensor responses for each VOC are shown in Fig. 3.

[Fig. 3. Representative QCM sensor responses to the five VOCs: ethanol, TCE, toluene, xylene, octane.]

Similar to the UWI database, the VOC identification database was divided into three training datasets, S1 ~ S3, and one validation dataset, TEST. The data distribution, shown in Table 3, was specifically biased towards the new class: dataset S1 had instances from ET, OC and TL; S2 added instances mainly from TCE (and very few from the previous three); and S3 added instances from XL (and very few from the previous four). The TEST set included instances from all classes. The base classifier used for this database was also an MLP type neural network, with a 6x3x5 architecture and an error goal of 0.5. Table 4 presents the training and generalization performances of the algorithm on this database.

[Table 3. Data distribution for the VOC database: instance counts of ET, OC, TL, TCE and XL in S1, S2, S3 and TEST.]

TABLE 4. LEARN++ PERFORMANCE ON THE VOC DATABASE

Dataset   TS1 (7)   TS2 ( )   TS3 (6)
S1        %         97.5%     92.5%
S2        –         %         96.4%
S3        –         –         %
TEST      57.3%     67.%      89.6%

The performance figures listed above follow a trend similar to those of the UWI database. The steady increase in the generalization performance indicates that Learn++ was able to incrementally learn the additional information provided by the new classes. We have evaluated Learn++ numerous times for each database with slightly different algorithm parameters, such as the order of introduction of the classes, the base classifier architecture and/or error goal, the percentage of the training data used as training and testing subsets, etc., and even, to a great extent, the choice of the base classifier. In all cases, the algorithm proved remarkably resistant to such changes.
IV. DISCUSSIONS & CONCLUSIONS

In this paper, we introduced a new version of the Learn++ algorithm. Learn++ essentially adds the incremental learning capability to any supervised neural network that normally does not possess this property.

As demonstrated by the results presented in this and our previous papers, Learn++ is indeed able to learn new information provided by additional databases, even when new classes are introduced by the consecutive datasets. Learn++ takes advantage of the synergistic expressive power of an ensemble of weak classifiers, where each classifier is trained with a training data subset drawn from a strategically updated distribution of the training data. The individual classifiers are then combined using the weighted majority-voting rule to obtain the final classifier. In this paper, we proposed an alternate weighted majority voting strategy, where the voting weights are determined dynamically for each instance, based on the estimated likelihood of each hypothesis to correctly classify that instance. The intuitive idea behind this approach is that those classifiers trained with a dataset that included instances nearby, in the Mahalanobis distance sense, to the unknown instance are more likely to correctly classify the unknown instance.

It can be argued that the classification decision is already being made by the choice of the class providing the minimum Mahalanobis distance, since if instance x belongs to a particular class, and instances from that class have been used in the current dataset, then the Mahalanobis distance between x and TR_tc is likely to be the minimum among all others. This is indeed true for datasets with non-overlapping classes and noise-free, well-behaving distributions. In practice, however, this is rarely the case, and a decision based on the Mahalanobis distance only invariably achieves poor classification performance on challenging datasets, such as the UWI and VOC datasets. We emphasize that the Mahalanobis distance is not used directly for making a classification decision, but rather to assign a weight to competing hypotheses.

Throughout the simulations, a number of additional observations were made, not readily apparent from the tables. In particular, we have tested the sensitivity of Learn++ to the order of presentation of the data, as well as to minor changes in its parameters. We found that the classification performance of Learn++ was virtually the same, regardless of the order in which the databases (and therefore the classes) were presented to the algorithm. Furthermore, the algorithm was considerably robust to changes in its internal parameters, such as the network size, error goal, the number of hypotheses generated, and even the type of base classifier. Finally, unlike most other supervised classifiers, Learn++ does not suffer from catastrophic forgetting, since previously generated classifiers are retained. Due to the stability-plasticity dilemma, some knowledge is indeed forgotten while new information is being learned; however, this appears to be insignificant, as indicated by the steady improvement in the generalization performance.

One might also wonder what generalization performance could be achieved if the entire database were available for strong learning. Training a strong classifier using the entire training database, we obtain performances in the mid 70% to high 80% range, similar to, or even slightly worse than, those of Learn++. This rather satisfying result further indicates the feasibility of Learn++ as an alternative to other incremental learning algorithms, as well as to non-incremental strong classifiers.

ACKNOWLEDGEMENT

This material is based upon work supported by the National Science Foundation under Grant No. ECS.

REFERENCES

[1] S. Grossberg, "Nonlinear neural networks: principles, mechanisms and architectures," Neural Networks, vol. 1, no. 1, pp. 17-61, 1988.
[2] R. Polikar, L. Udpa, S. Udpa, and V. Honavar, "Learn++: an incremental learning algorithm for supervised neural networks," IEEE Transactions on Systems, Man and Cybernetics, Part C, Special Issue on Knowledge Management, vol. 31, no. 4, pp. 497-508, 2001.
[3] G.A. Carpenter, S. Grossberg, N. Markuzon, J.H. Reynolds, and D.B. Rosen, "Fuzzy ARTMAP: a neural network architecture for incremental supervised learning of analog multidimensional maps," IEEE Trans. on Neural Networks, vol. 3, no. 5, pp. 698-713, 1992.
[4] J.R. Williamson, "Gaussian ARTMAP: a neural network for fast incremental learning of noisy multidimensional maps," Neural Networks, vol. 9, no. 5, pp. 881-897, 1996.
[5] C.P. Lim and R.F. Harrison, "An incremental adaptive network for on-line supervised learning and probability estimation," Neural Networks, vol. 10, no. 5, 1997.
[6] F.H. Hamker, "Life-long learning cell structures: continuously learning without catastrophic interference," Neural Networks, vol. 14, no. 4-5, pp. 551-573, 2001.
[7] G.C. Anagnostopoulos and M. Georgiopoulos, "Ellipsoid ART and ARTMAP for incremental clustering and classification," Proc. Int. Joint Conf. on Neural Networks (IJCNN 2001), vol. 2, 2001.
[8] E. Gomez-Sanchez, Y.A. Dimitriadis, J.M. Cano-Izquierdo, and J. Lopez-Coronado, "Safe-µARTMAP: a new solution for reducing category proliferation in fuzzy ARTMAP," Proc. Int. Joint Conf. on Neural Networks (IJCNN 2001), vol. 2, 2001.
[9] R. Polikar, J. Byorick, S. Krause, A. Marino, and M. Moreton, "Learn++: a classifier independent incremental learning algorithm for supervised neural networks," Proc. Int. Joint Conf. on Neural Networks (IJCNN 2002), vol. 2, Honolulu, HI, 2002.
[10] Y. Freund and R. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119-139, 1997.
[11] N. Littlestone and M. Warmuth, "The weighted majority algorithm," Information and Computation, vol. 108, pp. 212-261, 1994.
[12] L.K. Hansen and P. Salamon, "Neural network ensembles," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. 993-1001, 1990.
[13] M.I. Jordan and R.A. Jacobs, "Hierarchical mixtures of experts and the EM algorithm," Proc. Int. Joint Conf. on Neural Networks (IJCNN 1993), 1993.
[14] J. Kittler, M. Hatef, R.P.W. Duin, and J. Matas, "On combining classifiers," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226-239, 1998.
[15] T.G. Dietterich, "An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting and randomization," Machine Learning, vol. 40, no. 2, pp. 139-157, 2000.
[16] L.I. Kuncheva, "A theoretical study on six classifier fusion strategies," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 281-286, 2002.
[17] R. Polikar, "Algorithms for enhancing pattern separability, feature selection and incremental learning with applications to gas sensing electronic nose systems," Ph.D. dissertation, Iowa State University, Ames, IA, 2000.
[18] R. Polikar, L. Udpa, S. Udpa, and V. Honavar, "Learn++: an incremental learning algorithm for multilayer neural networks," Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2000), vol. 6, 2000.
