Reinforcement Learning in Associative Memory
Shaojuan Zhu and Dan Hammerstrom
Center for Biologically Inspired Information Engineering, ECE Department, OGI School of Science and Engineering at Oregon Health & Science University
{zhusj, strom}@ece.ogi.edu

Abstract— A reinforcement learning based associative memory structure (RLAM) is proposed. In this structure, a one-layer feed-forward Palm [1] model is applied to the networks. Instead of batch training, an on-line learning method is used to construct the memory. The networks are trained interactively according to reinforcement learning, which is biologically plausible. The experimental results show that the networks converge and generalize well.

I. INTRODUCTION

Associative memory appears to be one of the most important functions in many cognitive processes, such as recognition, prediction, planning, etc. In a saccade, for example, the sensory information obtained by an eye is transformed into movement information, which controls the eyeball muscles to implement the saccade. The system that determines the parameters of the saccade can be modeled as an associative memory that is trained to associate certain useful input/output mappings by previous experience. Many brain structures can often be modeled as associative memories.

Associative memory is actually a system that stores mappings from input patterns to output patterns, so that when one is encountered at the input, the other can be efficiently recalled. More importantly, when a pattern is in the neighborhood of the input pattern being stored, an output pattern corresponding to that input can be associatively recalled, even if the input pattern is incomplete or noisy. In hetero-association, the input and output patterns are different. Many cognitive processes implement hetero-association, such as transforming sensory information into muscle commands. In auto-association, the input and output vectors are in the same space; in this case, pattern completion occurs.

The Palm network [1, 2] is one of the useful associative network models. It is similar to the network developed by Willshaw's group [3], has large information capacity, and converges reliably. We have done simulations of large networks on both PC clusters and NASA's super-computers [4].
The results showed that the Palm networks are robust and scale reasonably well. However, in these models the mappings were not trained interactively. All the input and output patterns were randomly generated at one time, and a weight matrix is computed from the training set. In real applications, especially for a biologically inspired system, incremental learning via interaction with a real environment is very important. One important family of algorithms, reinforcement learning [5], has been used by systems that learn from interaction with their environment. The essence of reinforcement learning is that it is trial-and-error learning, where there is no supervisory feedback provided to the system; only a single reward signal is available.

The RLAM system discussed here is part of a larger robotic system that we are developing for NASA. This first version of RLAM will be used as a very rough simulation of the parietal cortex. It is believed that the parietal lobe is involved in visual cortical processing by combining eye position signals with retinotopic visual input to generate head-centered representations of the visual input [6, 7]. Our goal for RLAM was to create an associative memory structure that duplicates aspects of parietal function by learning mappings with a reinforcement based learning mechanism.

It is possible to implement reinforcement learning via a simple look-up table, where the agent chooses an action by searching the look-up table. It is possible to build localized association models that are very similar to a look-up table. However, for a task that has a very large state space or action space, traversing every state and action is not possible. A more general structure is needed, preferably one that allows reasonable generalization so that the agent can make good use of its experience. This requires an association model that uses distributed representations and storage.

In this paper we introduce a reinforcement learning associative network. The network is based on the Palm model, where a reinforcement learning based Hebbian learning rule is used for the storage process, and a k-winners-take-all algorithm is applied to implement the retrieval process.

II.
PALM ASSOCIATE NETWORKS

In the Palm model, the set of mappings to be stored is denoted as S = {(x^μ, y^μ) | μ = 1, 2, …, M}, where x^μ is the input pattern and y^μ is the corresponding output pattern. Both x^μ and y^μ are sparsely encoded binary vectors. In the training procedure, each pair of the mapping is presented to the network. A clipped Hebbian learning rule is applied to generate the weight matrix:

Biological computing for robot navigation and control, NASA, PI: Marwan Jabri, Co-PIs: Chris Assad, Dan Hammerstrom, Misha Pavel, Terrence Sejnowski, and Olivier Coenen.
W = ⋁_{μ=1}^{M} [ y^μ (x^μ)^T ],  (1)

where ⋁ is the Boolean OR operation and y^μ (x^μ)^T is the outer product. In the recall procedure, a pattern x̂ is presented to the network input. The input x̂ can be a noisy version of a training vector. The output vector that is retrieved by the network with weight W can be calculated by:

ŷ = f(W x̂ − θ),  (2)

where W x̂ represents the inner product, θ is a global threshold, and f() is a binary-valued transfer function. To set the threshold, the k-winners-take-all rule (k-WTA) was proposed by Palm. k is the number of active nodes in an output vector. All the output neurons have to compete among themselves so that only k elements in the output vector are allowed to be 1. The threshold is set adaptively to the value where only those nodes that have the k maximum values can be set to 1. By keeping k small, the input and output vectors are sparsely encoded; in addition, the use of binary synaptic weights adds to the simplicity of the network and does not affect recall performance. Palm shows that the information content in bits per synapse is higher than in the traditional content addressable memory, and that the maximum information capacity occurs when the weight matrix is half full. Another characteristic of the Palm network is that the retrieval procedure generally converges within one or two iterations.

We have done simulations of large networks on our Beowulf cluster and the NASA super-computers. Also, we have a spiking version of the basic Palm network. Palm network performance scales reasonably well with network size. The associative memory is fault tolerant and can do best-match recall, such that when a noisy or incomplete input is given, the associative memory will assume the closest matching input and produce the output vector associated with that input. In the case of auto-association, the closest training vector is returned. The concept of best-match implies that there is some metric defined over the vector space the memory operates in. For the purposes of this paper we will assume a basic Euclidean distance. However, one can easily imagine such networks operating in environments where more complex representations would be required.
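As a concrete illustration, equations (1) and (2) can be sketched in a few lines of NumPy. The helper names (`train_palm`, `recall_palm`) and the toy pattern sizes are ours, chosen for illustration only:

```python
import numpy as np

def train_palm(X, Y):
    """Batch-train a Palm associative memory (eq. 1).

    X: (M, n_in) and Y: (M, n_out) arrays of sparse binary patterns.
    Returns the clipped Hebbian weight matrix W = OR_mu [ y_mu x_mu^T ].
    """
    # Sum of outer products, then clip to {0, 1}: the Boolean OR of eq. (1).
    return ((Y.T @ X) > 0).astype(np.uint8)

def recall_palm(W, x_hat, k):
    """k-winners-take-all recall (eq. 2): keep the k output units with the
    largest dendritic sums, set the rest to 0 (ties may activate more than k)."""
    s = W @ x_hat                # inner products W . x_hat
    theta = np.sort(s)[-k]       # adaptive global threshold
    return (s >= theta).astype(np.uint8)

# Toy check: store two sparse mappings and recall from a noisy input.
X = np.array([[1, 1, 0, 0, 0, 0], [0, 0, 0, 1, 1, 0]], dtype=np.uint8)
Y = np.array([[1, 0, 1, 0], [0, 1, 0, 1]], dtype=np.uint8)
W = train_palm(X, Y)
noisy = np.array([1, 1, 1, 0, 0, 0], dtype=np.uint8)  # first pattern + 1 spurious bit
print(recall_palm(W, noisy, k=2))                     # -> [1 0 1 0]
```

Even with a spurious input bit, the k-WTA threshold recovers the stored output pattern, which is the best-match behavior described above.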
For some applications, if the input and output mappings are known, and both input and output patterns can be represented by binary sparse vectors, building a Palm network is straightforward. However, the Palm model we are using does not perform interactive training. Interaction between the system and its environment is crucial. A robot can perform some simple tasks easily. But for more complex tasks, where flexibility or generalization is required, interactive learning is often the most efficient approach. In this case, can the robot adapt to an unknown environment? Can it learn new skills to accomplish a new task? Reinforcement learning is a mechanism by which a robot learns to know the environment and establish the mappings, based on a simple reward signal. In this paper we demonstrate a version of Palm's network that is trained via reinforcement learning, in which the input and output mappings are established incrementally. We will still use the same Palm retrieval algorithm. In the next two sections we will first examine the issue of training a Palm network incrementally, and second, the use of this incremental update model in reinforcement learning.

III. INCREMENTAL LEARNING

In the original Palm model, the associative network is constructed in a batch mode, where all the input and output mappings are used to generate the completed weight matrix before any retrieval is done. The network weights are only updated after all of the input and output mappings are presented. Batch learning is also called off-line training. Contrary to off-line training is on-line training, where learning occurs incrementally for each input vector presented to the network. Incremental learning allows the associative memory to adapt to the environment. For incremental learning, the contribution of each input/output pair is summed up, and the adaptation is noted as:

ΔW = R(x^μ, y^μ),  (3)

where R(x, y) is the local synaptic rule that determines the amount of weight change based on the association of input x^μ to output y^μ. For batch learning, if too many vectors are stored, the interference among vectors can become a severe problem, and sometimes catastrophic forgetting occurs.
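For the clipped Hebbian rule, the local synaptic rule R(x, y) of equation (3) is simply the clipped outer product, so the on-line update folds one pair at a time into the weight matrix. A minimal sketch (the function name is ours):

```python
import numpy as np

def update_palm(W, x, y):
    """On-line version of the clipped Hebbian rule: fold one new
    (input, output) pair into an existing weight matrix, i.e. eq. (3)
    with R(x, y) taken as the clipped outer product y x^T."""
    return np.maximum(W, np.outer(y, x)).astype(np.uint8)

# Start from an empty memory and add pairs as they arrive.
W = np.zeros((4, 6), dtype=np.uint8)
W = update_palm(W, np.array([1, 1, 0, 0, 0, 0]), np.array([1, 0, 1, 0]))
W = update_palm(W, np.array([0, 0, 0, 1, 1, 0]), np.array([0, 1, 0, 1]))
print(int(W.sum()))   # each pair sets 4 synapses -> 8 ones total
```

Because the weights are binary and only ever turned on, presenting the pairs one at a time yields exactly the same matrix as the batch OR of equation (1).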
Using incremental learning, the learning system may predict its output based on its previous experience, and can adjust its parameters to be better adapted to its environment. So the system may remember the newer patterns at the price of forgetting the older ones. Bayesian Confidence Propagation Neural Networks (BCPNN), proposed by Anders Lansner [8], perform this kind of incremental palimpsest learning. For the time being we are assuming a stationary environment and are not considering more complex BCPNN-like learning rules at this time.

After batch learning, the associative memory approximately partitions the input space into Voronoi regions, each of which corresponds to one input/output vector mapping. A Voronoi region of a vector V_i is the union of all vectors V to which V_i is the closest one:

V_i = { V : |V − V_i| < |V − V_j|, ∀ j ≠ i },  (4)

where |V − V_i| is the Euclidean distance. In batch training, if all the input patterns are equally likely, after training, all the training vectors will have approximately
equal weight and the Voronoi boundaries are equidistant between any pair of neighboring vectors in the space. Since the vectors are the same length, the Voronoi region sits on an N-dimensional sphere.

For incremental learning to work, we have to introduce the concept of weighted Voronoi regions. Definition: let S denote a set of attractor vectors such that each vector V_i has an assigned positive, finite influence ω(i). In this case, the distance of an arbitrary vector V from V_i is scaled by the influence, |V − V_i| / ω(i). The weighted Voronoi region for S is a subdivision of the input space such that each vector V_i in S is associated with a region consisting of all vectors V for which V_i is the nearest vector under this weighted distance metric. In incremental learning, if a vector V_i is presented frequently, its influence ω(i) on its neighbors will become larger, and its corresponding Voronoi region will get larger. Our RLAM network then utilizes this aspect of associative memory with reinforcement learning techniques to drive the incremental learning.

IV. REINFORCEMENT LEARNING

It is believed that many biological neural systems perform some kind of reinforcement learning [5]. The agent and the environment are the two components of the learning system, and the learning takes place via a series of steps. At each step, the agent takes an action; if the action is successful, the agent gets a reward, otherwise no reward is given. The agent's goal is to maximize the reward it gets. After each step, the agent updates its output selection policy according to the reward, and will incrementally improve at performing the task. After exploitation and exploration, the agent generally finds an optimum policy for the task so as to get the maximum reward.

One simple way to do reinforcement learning is Q-learning. Q-learning assumes that there is some system that can be in various states, S. Furthermore, the agent can take several actions, a, that change the state. The goal of Q-learning is to develop a policy that generates certain actions in certain states so as to maximize the reward. In Q-learning, the value for the state–action pair, Q(s, a), represents the agent's current knowledge about the expected long-term reward for taking action a in state S.
For each state there may be several Q-values, one for each action; the agent will generally take the action that has the maximum Q-value ("exploitation"). The goal of learning is to reach a long-term maximum reward from the environment. The update of the Q-value is:

Q(s_t, a_t) ← Q(s_t, a_t) + α [ r_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t) ].  (5)

Q-learning is one of the most popular forms of reinforcement learning. However, it may converge quite slowly to a good policy. The result of Q-learning is essentially a look-up table such that, for any state, the corresponding possible actions and their relative value functions are listed. Since it is difficult to explore all possible actions from all possible states (even once, let alone several times), we want our learning system to be able to generalize effectively, which allows the efficient storage of learned information and broad utilization of the knowledge between similar states and actions. Many researchers have been using neural networks to solve the generalization problem. For example, the nodes in the network can be trained to respond to a certain set of states, and the networks can be trained to store a variety of mappings, including: state → policy, state → value function, state+action → reward, etc.

As mentioned before, an associative memory stores input/output mappings. Reinforcement learning is also a procedure of establishing mappings. We will use the Q-learning technique in our incremental learning Palm network, which implements the two mappings S → a and S → Q. Both networks are one-layer association networks with no hidden layers.

V. THE LEARNING MODEL

We choose the grid-world task to compare the reinforcement learning based Palm network with the table-driven Q-learning system. In our experiments, a grid-world of 5 × 5 is discussed. Each grid cell corresponds to a state, and in each state 4 different actions are available: take one step in one of the four directions, north, south, east or west. Each action is a step, and the step length is set to 1. The agent starts at a random position in the grid, and the task in our grid-world experiment is to find the goal that is located at the center of the grid.
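The tabular update of equation (5) can be sketched on a tiny corridor task. The corridor itself, α = 0.1, γ = 0.9, and the ε-greedy exploration rate are illustrative choices of ours, not values from the paper:

```python
import random
import numpy as np

random.seed(0)

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update, equation (5)."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Tiny corridor: states 0..4, goal at state 2, actions 0 = left, 1 = right.
Q = np.zeros((5, 2))
for _ in range(2000):
    s = random.choice([0, 1, 3, 4])          # epoch starts at a random non-goal state
    while s != 2:
        # epsilon-greedy selection: mostly exploit, sometimes explore
        a = random.randrange(2) if random.random() < 0.2 else int(Q[s].argmax())
        s_next = max(0, min(4, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == 2 else 0.0      # reward only on reaching the goal
        q_learning_step(Q, s, a, r, s_next)
        s = s_next

# The greedy policy should point toward the goal from both ends of the corridor.
print(int(Q[0].argmax()), int(Q[4].argmax()))
```

After training, the greedy action at state 0 is "right" and at state 4 is "left", i.e. the table encodes a policy that moves toward the goal; note that every state had to be visited repeatedly, which is exactly the weakness the paper attributes to the look-up table.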
This task can be extended to a more general task in which the agent starts at a random grid point and the goal is also located at a random point in the grid. Another variation is to add barriers which must be traversed (like maze learning). For the more general task, we only need to map the current grid-world state space to a larger state space in which the relative position between the goal and the agent is included. In the more general grid-world state space, the agent's position is always centered in the state space. We have experimented on the general grid-world task, and moving the smaller space to a larger space is fairly straightforward, so to simplify the discussion we only discuss the grid-world task where the goal is always set at the center of the grid².

² The grid-world example, though very simplistic, fits the general task of learning to perform visual coordination as required by our robot system.
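The grid-world environment described above is small enough to sketch directly. The coordinate convention (rows increase southward) and the function names are our assumptions:

```python
import random

SIZE, GOAL = 5, (2, 2)        # 5 x 5 grid, goal fixed at the center
MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, 1), 3: (0, -1)}  # N, S, E, W

def reset():
    """Start a new epoch at a random non-goal cell."""
    while True:
        s = (random.randrange(SIZE), random.randrange(SIZE))
        if s != GOAL:
            return s

def step(s, a):
    """Take one unit step, clipped at the walls; reward 1 on
    reaching the goal, otherwise 0."""
    r, c = s[0] + MOVES[a][0], s[1] + MOVES[a][1]
    s_next = (min(max(r, 0), SIZE - 1), min(max(c, 0), SIZE - 1))
    return s_next, (1.0 if s_next == GOAL else 0.0)

start = reset()              # random non-goal start
s, rew = step((0, 0), 1)     # one step south from the corner
print(s, rew)                # -> (1, 0) 0.0
```

An epoch under this sketch is a sequence of `step` calls from `reset()` until the goal cell is reached, matching the step/epoch terminology used in the Results section.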
In the task, when the agent finds the goal, a new epoch begins and the agent is started at a new random state. The reward is either 1 or 0, corresponding to finding or missing the goal respectively. The states are represented by a 5 × 5 two-dimensional array of neurons. For each state that is not at the edge of the grid, the 9 neurons in a 3 × 3 field surrounding the state's neuron are fired. For states that are at the edge or the corner of the grid, a minimum of 4 neurons may be activated.

We constructed two networks: W_act and W_critic. W_act implements the mapping from the state S to the action a, and W_critic implements the mapping from the state S to the value function Q. The inputs to both networks are the state. The output of the W_act network is the action vector, which has only 4 elements, corresponding to the 4 directions an action may take. The action vector can have only 1 active element. The output from the W_critic network is the Q-value, represented by the output element's value. W_act is initialized to random values, and all the elements in W_critic are initialized with a very small positive value, 0.1, to set up connections between all the states and the output value neuron.

The action vector is calculated by equation (6), which is the same rule used in the Palm retrieval process. In the action vector, there is always one and only one neuron that is active. If, by equation (6), there is more than one active neuron, the agent randomly chooses one action from all the possible options for that state. This means that more than one action is possible for that state. It also injects a certain degree of randomness into the action selection, which is necessary for the network to adequately explore its environment ("exploration"). The weight adaptation for W_act is shown in equation (12), and the weight adaptation for W_critic is shown in equation (13). In the equations, products such as W_critic S_t represent the inner product:

a_t = f(W_act S_t − θ),  (6)
S_{t+1} = T(S_t, a_t),  (7)
Q_t = W_critic S_t,  (8)
Q_{t+1} = W_critic S_{t+1},  (9)
Q*_t = r_{t+1} + γ Q_{t+1},  (10)
ε = Q*_t − Q_t,  (11)
ΔW_act = ε a_t (S_t)^T,  (12)
ΔW_critic(j) = α ε S_t(j).  (13)

In the above algorithm, at time step t, the agent is at state S_t. The action a_t is computed from W_act and the agent takes the action, entering a new state S_{t+1} via the environment's state transition T. The value functions for states S_t and S_{t+1} are obtained from equations (8) and (9) respectively. The error is calculated by equation (11), which in turn controls the adaptation of the weights W_act and W_critic.
After learning, both W_act and W_critic are normalized by the use of the sigmoid function, so the weight values are constrained to be between 0 and 1. This forces a slight inhibitory effect on the weight values. The weights then require multiple bits to represent; we have not yet explored the minimum precision required for effective incremental learning. The normalization also reduces the tendency of Hebbian learning to increase the weights indefinitely. Since we are assuming a system with stationary probabilistic behavior, which is manifest in the goal and its reward signal being fixed in time, no additional subtractive or decay mechanism is required. The weight values can also be thought of as representing the confidence or probability of the strength of the connection between the input and output neurons.

VI. RESULTS

We compared the results from the reinforcement learning based associative memory with those from the traditional Q-learning system. Using the Reinforcement Learning based Associative Memory (RLAM) structure, the learning process converged more rapidly than with the Q-learning Look-up Table (QLT) system, and RLAM's ability to generalize was much better than QLT's. In the terminology below, each action is a step, and an epoch is the series of steps that the agent takes to accomplish the task.

In Fig. 1, after 3000 training steps, the RLAM achieves a successful retrieval rate of 98%, which means that, for 98% of all the grid positions, the agent can successfully accomplish the task. In the QLT system, after 3000 steps, the system attains a far lower successful retrieval rate; only after 40K steps of trials can the agent achieve a 97% successful retrieval rate. From Table I, to achieve a successful retrieval rate of 98%, the RLAM agent was trained for 37 epochs, and not all the grid coordinates were visited by the agent. But the agent still learned enough to be able to extrapolate successfully to the unvisited states; the agent has acquired the generalization ability, an important component of an associative memory implementation. In the QLT system, the agent was trained for 245 epochs, which means that, on average, each grid point was visited at least 9 times (245/(5 × 5)).
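The sigmoid normalization step can be sketched as follows (our reading of the text; the function name is illustrative):

```python
import numpy as np

def normalize_weights(W):
    """Squash weights into (0, 1) with the logistic sigmoid, bounding
    the unlimited growth that plain Hebbian updates would allow."""
    return 1.0 / (1.0 + np.exp(-W))

W = np.array([-2.0, 0.0, 3.0, 10.0])
W_norm = normalize_weights(W)
print(np.all((W_norm > 0) & (W_norm < 1)))   # -> True
```

Because the sigmoid is monotonic, the ordering of the weights, and hence the winner of the k-WTA retrieval step, is preserved while the values are kept in a bounded range.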
Since the look-up table does not generalize well, the agent has to traverse every state several times, until it has enough experience with the environment to operate successfully.
Fig. 1. Retrieval performance (successful retrieval rate versus total number of training steps, in thousands) for RLAM and QLT.

TABLE I

        Number of Steps   Number of Epochs   Successful Retrieval Rate
RLAM    3K                37                 98%
QLT     40K               245                97%

In the RLAM system, after training, the W_act network shows a hierarchical organization. We illustrate the weight matrices by gray-level images, with 0 representing black and 1 representing white. In Fig. 2, each image corresponds to the weight matrix that connects to one of the four action neurons: (a) illustrates the weights to neuron 1, which is responsible for going south, (b) to neuron 2 for going north, (c) to neuron 3 for going east, and (d) to neuron 4 for going west. We can see from Fig. 2 that each output neuron associates with a different group of states.

Fig. 2. Pseudo gray-level images of the weights corresponding to the four output neurons, with 1 representing white and 0 representing black. Each image illustrates the synaptic strengths of the weights.

Before training, the input/output mappings are random, and the weights display chaotic pseudo-images. As more training iterations take place, the network gradually becomes self-organized. In Fig. 2, when the agent lands at the top states of the grid, the move-south action dominates among the 4 neurons, so the agent will move south. If, for some states, more than two neurons are prominent, the agent can randomly choose one direction. For reaching the final goal, there is not much difference between such choices, and in real-world problems there is usually more than one option of comparable value.

VII. CONCLUSION

We have presented a reinforcement learning based version (RLAM) of the Palm associative memory, and have compared it with the traditional Q-learning system (QLT) on the speed and efficiency of the learning, and on its generalization during retrieval. RLAM is more biological in that the system is adapted incrementally by the interaction between the agent and the environment. Also, the memory of RLAM converges more quickly and generalizes better than the QLT. It is very likely that we could achieve comparable results by using more traditional error-driven neural network models such as Back-Propagation or SVMs. However, in our robotic system we have a requirement for rapid learning, which the associative memory based system provides. The grid problem shown here does not require the memory to create complex representations. The next step in this work is to put RLAM into a real application with real-world data and assess its ability to create the necessary representations under those conditions. Early results are encouraging.

We use two associative memories to simulate the two mappings. Both networks are one-layer networks with no hidden layer. This provides a simple structure for the associative memory design. The RLAM proves to be stable, and the conjecture that incremental learning based Palm networks approximate the weighted Voronoi regions holds in the above simple example. But theoretically, how the attractor basin of the system is formed as more and more states are trained is still under research, and the ability of RLAM to represent the mappings in a more complex problem must be examined.

ACKNOWLEDGEMENTS

This work was supported in part by NASA Contracts NCC and NCC.

REFERENCES

[1] Palm, G., "On Associative Memory," Biological Cybernetics, 1980.
[2] Palm, G., et al., "Neural Associative Memories," in Associative Processing and Processors, A. Krikelis and C.C. Weems, Eds. IEEE Computer Society: Los Alamitos, CA, 1997.
[3] Buckingham, J. and D. Willshaw, "Performance Characteristics of the Associative Net," Network: Computation in Neural Systems.
[4] Zhu, S. and D. Hammerstrom, "Simulations of Associative Neural Networks," in ICONIP.
[5] Sutton, R.S. and A.G. Barto, Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press, 1998.
[6] Goodman, S.J. and R.A. Andersen, "Algorithm programmed by a neural network model for coordinate transformation," in IJCNN.
[7] Buneo, C.A., et al., "Direct visuomotor transformations for reaching," Nature.
[8] Sandberg, A., et al., "A palimpsest memory based on an incremental Bayesian learning rule," Neurocomputing.
Journal of Appled Mahemacs and Compuaonal Mechancs 3, (), 45-5 HEAT CONDUCTION PROBLEM IN A TWO-LAYERED HOLLOW CYLINDER BY USING THE GREEN S FUNCTION METHOD Sansław Kukla, Urszula Sedlecka Insue of Mahemacs,
More informationMath 128b Project. Jude Yuen
Mah 8b Proec Jude Yuen . Inroducon Le { Z } be a sequence of observed ndependen vecor varables. If he elemens of Z have a on normal dsrbuon hen { Z } has a mean vecor Z and a varancecovarance marx z. Geomercally
More informationNew M-Estimator Objective Function. in Simultaneous Equations Model. (A Comparative Study)
Inernaonal Mahemacal Forum, Vol. 8, 3, no., 7 - HIKARI Ld, www.m-hkar.com hp://dx.do.org/.988/mf.3.3488 New M-Esmaor Objecve Funcon n Smulaneous Equaons Model (A Comparave Sudy) Ahmed H. Youssef Professor
More informationIncluding the ordinary differential of distance with time as velocity makes a system of ordinary differential equations.
Soluons o Ordnary Derenal Equaons An ordnary derenal equaon has only one ndependen varable. A sysem o ordnary derenal equaons consss o several derenal equaons each wh he same ndependen varable. An eample
More informationCELLULAR AUTOMATA BASED PATH-PLANNING ALGORITHM FOR AUTONOMOUS MOBILE ROBOTS. Rami Al-Hmouz, Tauseef Gulrez & Adel Al-Jumaily
CELLULAR AUTOMATA BASED PATH-PLANNING ALGORITHM FOR AUTONOMOUS MOBILE ROBOTS Ram Al-Hmouz, Tauseef Gulrez & Adel Al-Jumaly Informaon and Communcaons Group ARC Cenre of Ecellence n Auonomous Sysems Unversy
More informationNew conditioning model for robots
ESANN 011 proceedngs, European Symposum on Arfcal Neural Newors, Compuaonal Inellgence and Machne Learnng. Bruges (Belgum), 7-9 Aprl 011, 6doc.com publ., ISBN 978--87419-044-5. Avalable from hp://www.6doc.com/en/lvre/?gcoi=8001100817300.
More informatione-journal Reliability: Theory& Applications No 2 (Vol.2) Vyacheslav Abramov
June 7 e-ournal Relably: Theory& Applcaons No (Vol. CONFIDENCE INTERVALS ASSOCIATED WITH PERFORMANCE ANALYSIS OF SYMMETRIC LARGE CLOSED CLIENT/SERVER COMPUTER NETWORKS Absrac Vyacheslav Abramov School
More informationMechanics Physics 151
Mechancs Physcs 5 Lecure 0 Canoncal Transformaons (Chaper 9) Wha We Dd Las Tme Hamlon s Prncple n he Hamlonan formalsm Dervaon was smple δi δ Addonal end-pon consrans pq H( q, p, ) d 0 δ q ( ) δq ( ) δ
More informationEcon107 Applied Econometrics Topic 5: Specification: Choosing Independent Variables (Studenmund, Chapter 6)
Econ7 Appled Economercs Topc 5: Specfcaon: Choosng Independen Varables (Sudenmund, Chaper 6 Specfcaon errors ha we wll deal wh: wrong ndependen varable; wrong funconal form. Ths lecure deals wh wrong ndependen
More informationThis document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore.
Ths documen s downloaded from DR-NTU, Nanyang Technologcal Unversy Lbrary, Sngapore. Tle A smplfed verb machng algorhm for word paron n vsual speech processng( Acceped verson ) Auhor(s) Foo, Say We; Yong,
More informationAnomaly Detection. Lecture Notes for Chapter 9. Introduction to Data Mining, 2 nd Edition by Tan, Steinbach, Karpatne, Kumar
Anomaly eecon Lecure Noes for Chaper 9 Inroducon o aa Mnng, 2 nd Edon by Tan, Senbach, Karpane, Kumar 2/14/18 Inroducon o aa Mnng, 2nd Edon 1 Anomaly/Ouler eecon Wha are anomales/oulers? The se of daa
More informationEEL 6266 Power System Operation and Control. Chapter 5 Unit Commitment
EEL 6266 Power Sysem Operaon and Conrol Chaper 5 Un Commmen Dynamc programmng chef advanage over enumeraon schemes s he reducon n he dmensonaly of he problem n a src prory order scheme, here are only N
More informationSingle-loop System Reliability-Based Design & Topology Optimization (SRBDO/SRBTO): A Matrix-based System Reliability (MSR) Method
10 h US Naonal Congress on Compuaonal Mechancs Columbus, Oho 16-19, 2009 Sngle-loop Sysem Relably-Based Desgn & Topology Opmzaon (SRBDO/SRBTO): A Marx-based Sysem Relably (MSR) Mehod Tam Nguyen, Junho
More informationThe Analysis of the Thickness-predictive Model Based on the SVM Xiu-ming Zhao1,a,Yan Wang2,band Zhimin Bi3,c
h Naonal Conference on Elecrcal, Elecroncs and Compuer Engneerng (NCEECE The Analyss of he Thcknesspredcve Model Based on he SVM Xumng Zhao,a,Yan Wang,band Zhmn B,c School of Conrol Scence and Engneerng,
More informationCS286.2 Lecture 14: Quantum de Finetti Theorems II
CS286.2 Lecure 14: Quanum de Fne Theorems II Scrbe: Mara Okounkova 1 Saemen of he heorem Recall he las saemen of he quanum de Fne heorem from he prevous lecure. Theorem 1 Quanum de Fne). Le ρ Dens C 2
More information12d Model. Civil and Surveying Software. Drainage Analysis Module Detention/Retention Basins. Owen Thornton BE (Mech), 12d Model Programmer
d Model Cvl and Surveyng Soware Dranage Analyss Module Deenon/Reenon Basns Owen Thornon BE (Mech), d Model Programmer owen.hornon@d.com 4 January 007 Revsed: 04 Aprl 007 9 February 008 (8Cp) Ths documen
More informationCS 268: Packet Scheduling
Pace Schedulng Decde when and wha pace o send on oupu ln - Usually mplemened a oupu nerface CS 68: Pace Schedulng flow Ion Soca March 9, 004 Classfer flow flow n Buffer managemen Scheduler soca@cs.bereley.edu
More informationHidden Markov Models Following a lecture by Andrew W. Moore Carnegie Mellon University
Hdden Markov Models Followng a lecure by Andrew W. Moore Carnege Mellon Unversy www.cs.cmu.edu/~awm/uorals A Markov Sysem Has N saes, called s, s 2.. s N s 2 There are dscree meseps, 0,, s s 3 N 3 0 Hdden
More informationMechanics Physics 151
Mechancs Physcs 5 Lecure 9 Hamlonan Equaons of Moon (Chaper 8) Wha We Dd Las Tme Consruced Hamlonan formalsm H ( q, p, ) = q p L( q, q, ) H p = q H q = p H = L Equvalen o Lagrangan formalsm Smpler, bu
More informationNotes on the stability of dynamic systems and the use of Eigen Values.
Noes on he sabl of dnamc ssems and he use of Egen Values. Source: Macro II course noes, Dr. Davd Bessler s Tme Seres course noes, zarads (999) Ineremporal Macroeconomcs chaper 4 & Techncal ppend, and Hamlon
More information10. A.C CIRCUITS. Theoretically current grows to maximum value after infinite time. But practically it grows to maximum after 5τ. Decay of current :
. A. IUITS Synopss : GOWTH OF UNT IN IUIT : d. When swch S s closed a =; = d. A me, curren = e 3. The consan / has dmensons of me and s called he nducve me consan ( τ ) of he crcu. 4. = τ; =.63, n one
More informationBoosted LMS-based Piecewise Linear Adaptive Filters
016 4h European Sgnal Processng Conference EUSIPCO) Boosed LMS-based Pecewse Lnear Adapve Flers Darush Kar and Iman Marvan Deparmen of Elecrcal and Elecroncs Engneerng Blken Unversy, Ankara, Turkey {kar,
More informationMechanics Physics 151
Mechancs Physcs 5 Lecure 9 Hamlonan Equaons of Moon (Chaper 8) Wha We Dd Las Tme Consruced Hamlonan formalsm Hqp (,,) = qp Lqq (,,) H p = q H q = p H L = Equvalen o Lagrangan formalsm Smpler, bu wce as
More informationAppendix H: Rarefaction and extrapolation of Hill numbers for incidence data
Anne Chao Ncholas J Goell C seh lzabeh L ander K Ma Rober K Colwell and Aaron M llson 03 Rarefacon and erapolaon wh ll numbers: a framewor for samplng and esmaon n speces dversy sudes cology Monographs
More informationCHAPTER 2: Supervised Learning
HATER 2: Supervsed Learnng Learnng a lass from Eamples lass of a famly car redcon: Is car a famly car? Knowledge eracon: Wha do people epec from a famly car? Oupu: osve (+) and negave ( ) eamples Inpu
More informationMANY real-world applications (e.g. production
Barebones Parcle Swarm for Ineger Programmng Problems Mahamed G. H. Omran, Andres Engelbrech and Ayed Salman Absrac The performance of wo recen varans of Parcle Swarm Opmzaon (PSO) when appled o Ineger
More informationTSS = SST + SSE An orthogonal partition of the total SS
ANOVA: Topc 4. Orhogonal conrass [ST&D p. 183] H 0 : µ 1 = µ =... = µ H 1 : The mean of a leas one reamen group s dfferen To es hs hypohess, a basc ANOVA allocaes he varaon among reamen means (SST) equally
More informationCH.3. COMPATIBILITY EQUATIONS. Continuum Mechanics Course (MMC) - ETSECCPB - UPC
CH.3. COMPATIBILITY EQUATIONS Connuum Mechancs Course (MMC) - ETSECCPB - UPC Overvew Compably Condons Compably Equaons of a Poenal Vecor Feld Compably Condons for Infnesmal Srans Inegraon of he Infnesmal
More informationAlgorithm Research on Moving Object Detection of Surveillance Video Sequence *
Opcs and Phooncs Journal 03 3 308-3 do:0.436/opj.03.3b07 Publshed Onlne June 03 (hp://www.scrp.org/journal/opj) Algorhm Research on Movng Objec Deecon of Survellance Vdeo Sequence * Kuhe Yang Zhmng Ca
More information( t) Outline of program: BGC1: Survival and event history analysis Oslo, March-May Recapitulation. The additive regression model
BGC1: Survval and even hsory analyss Oslo, March-May 212 Monday May 7h and Tuesday May 8h The addve regresson model Ørnulf Borgan Deparmen of Mahemacs Unversy of Oslo Oulne of program: Recapulaon Counng
More informationOn computing differential transform of nonlinear non-autonomous functions and its applications
On compung dfferenal ransform of nonlnear non-auonomous funcons and s applcaons Essam. R. El-Zahar, and Abdelhalm Ebad Deparmen of Mahemacs, Faculy of Scences and Humanes, Prnce Saam Bn Abdulazz Unversy,
More informationIntroduction ( Week 1-2) Course introduction A brief introduction to molecular biology A brief introduction to sequence comparison Part I: Algorithms
Course organzaon Inroducon Wee -2) Course nroducon A bref nroducon o molecular bology A bref nroducon o sequence comparson Par I: Algorhms for Sequence Analyss Wee 3-8) Chaper -3, Models and heores» Probably
More informationFI 3103 Quantum Physics
/9/4 FI 33 Quanum Physcs Aleander A. Iskandar Physcs of Magnesm and Phooncs Research Grou Insu Teknolog Bandung Basc Conces n Quanum Physcs Probably and Eecaon Value Hesenberg Uncerany Prncle Wave Funcon
More informationPolymerization Technology Laboratory Course
Prakkum Polymer Scence/Polymersaonsechnk Versuch Resdence Tme Dsrbuon Polymerzaon Technology Laboraory Course Resdence Tme Dsrbuon of Chemcal Reacors If molecules or elemens of a flud are akng dfferen
More informationGenetic Algorithm in Parameter Estimation of Nonlinear Dynamic Systems
Genec Algorhm n Parameer Esmaon of Nonlnear Dynamc Sysems E. Paeraks manos@egnaa.ee.auh.gr V. Perds perds@vergna.eng.auh.gr Ah. ehagas kehagas@egnaa.ee.auh.gr hp://skron.conrol.ee.auh.gr/kehagas/ndex.hm
More informationAn Effective TCM-KNN Scheme for High-Speed Network Anomaly Detection
Vol. 24, November,, 200 An Effecve TCM-KNN Scheme for Hgh-Speed Nework Anomaly eecon Yang L Chnese Academy of Scences, Bejng Chna, 00080 lyang@sofware.c.ac.cn Absrac. Nework anomaly deecon has been a ho
More informationEfficient Asynchronous Channel Hopping Design for Cognitive Radio Networks
Effcen Asynchronous Channel Hoppng Desgn for Cognve Rado Neworks Chh-Mn Chao, Chen-Yu Hsu, and Yun-ng Lng Absrac In a cognve rado nework (CRN), a necessary condon for nodes o communcae wh each oher s ha
More informationAnisotropic Behaviors and Its Application on Sheet Metal Stamping Processes
Ansoropc Behavors and Is Applcaon on Shee Meal Sampng Processes Welong Hu ETA-Engneerng Technology Assocaes, Inc. 33 E. Maple oad, Sue 00 Troy, MI 48083 USA 48-79-300 whu@ea.com Jeanne He ETA-Engneerng
More informationAttribute Reduction Algorithm Based on Discernibility Matrix with Algebraic Method GAO Jing1,a, Ma Hui1, Han Zhidong2,b
Inernaonal Indusral Informacs and Compuer Engneerng Conference (IIICEC 05) Arbue educon Algorhm Based on Dscernbly Marx wh Algebrac Mehod GAO Jng,a, Ma Hu, Han Zhdong,b Informaon School, Capal Unversy
More informationRelative controllability of nonlinear systems with delays in control
Relave conrollably o nonlnear sysems wh delays n conrol Jerzy Klamka Insue o Conrol Engneerng, Slesan Techncal Unversy, 44- Glwce, Poland. phone/ax : 48 32 37227, {jklamka}@a.polsl.glwce.pl Keywor: Conrollably.
More informationIterative Learning Control and Applications in Rehabilitation
Ierave Learnng Conrol and Applcaons n Rehablaon Yng Tan The Deparmen of Elecrcal and Elecronc Engneerng School of Engneerng The Unversy of Melbourne Oulne 1. A bref nroducon of he Unversy of Melbourne
More informationM. Y. Adamu Mathematical Sciences Programme, AbubakarTafawaBalewa University, Bauchi, Nigeria
IOSR Journal of Mahemacs (IOSR-JM e-issn: 78-578, p-issn: 9-765X. Volume 0, Issue 4 Ver. IV (Jul-Aug. 04, PP 40-44 Mulple SolonSoluons for a (+-dmensonalhroa-sasuma shallow waer wave equaon UsngPanlevé-Bӓclund
More informationTesting a new idea to solve the P = NP problem with mathematical induction
Tesng a new dea o solve he P = NP problem wh mahemacal nducon Bacground P and NP are wo classes (ses) of languages n Compuer Scence An open problem s wheher P = NP Ths paper ess a new dea o compare he
More informationOnline Supplement for Dynamic Multi-Technology. Production-Inventory Problem with Emissions Trading
Onlne Supplemen for Dynamc Mul-Technology Producon-Invenory Problem wh Emssons Tradng by We Zhang Zhongsheng Hua Yu Xa and Baofeng Huo Proof of Lemma For any ( qr ) Θ s easy o verfy ha he lnear programmng
More informationA Novel Efficient Stopping Criterion for BICM-ID System
A Novel Effcen Soppng Creron for BICM-ID Sysem Xao Yng, L Janpng Communcaon Unversy of Chna Absrac Ths paper devses a novel effcen soppng creron for b-nerleaved coded modulaon wh erave decodng (BICM-ID)
More information5th International Conference on Advanced Design and Manufacturing Engineering (ICADME 2015)
5h Inernaonal onference on Advanced Desgn and Manufacurng Engneerng (IADME 5 The Falure Rae Expermenal Sudy of Specal N Machne Tool hunshan He, a, *, La Pan,b and Bng Hu 3,c,,3 ollege of Mechancal and
More information. The geometric multiplicity is dim[ker( λi. number of linearly independent eigenvectors associated with this eigenvalue.
Lnear Algebra Lecure # Noes We connue wh he dscusson of egenvalues, egenvecors, and dagonalzably of marces We wan o know, n parcular wha condons wll assure ha a marx can be dagonalzed and wha he obsrucons
More information( ) [ ] MAP Decision Rule
Announcemens Bayes Decson Theory wh Normal Dsrbuons HW0 due oday HW o be assgned soon Proec descrpon posed Bomercs CSE 90 Lecure 4 CSE90, Sprng 04 CSE90, Sprng 04 Key Probables 4 ω class label X feaure
More informationComputational results on new staff scheduling benchmark instances
TECHNICAL REPORT Compuaonal resuls on new saff shedulng enhmark nsanes Tm Curos Rong Qu ASAP Researh Group Shool of Compuer Sene Unersy of Nongham NG8 1BB Nongham UK Frs pulshed onlne: 19-Sep-2014 las
More informationAn Integrated and Interactive Video Retrieval Framework with Hierarchical Learning Models and Semantic Clustering Strategy
An Inegraed and Ineracve Vdeo Rereval Framewor wh Herarchcal Learnng Models and Semanc Cluserng Sraegy Na Zhao, Shu-Chng Chen, Me-Lng Shyu 2, Suar H. Rubn 3 Dsrbued Mulmeda Informaon Sysem Laboraory School
More informationGoal Seeking of Mobile Robot Using Fuzzy Actor Critic Learning Algorithm
Goal Seekng of Moble Robo Usng Fuzzy Acor Crc Learnng Algorhm F. Lachekhab, M. Tadjne Absrac In hs paper, we presen a sudy of a basc behavor of moble robo, whch s goal seekng. Frsly, we use he heurscs
More informationA Tour of Modeling Techniques
A Tour of Modelng Technques John Hooker Carnege Mellon Unversy EWO Semnar February 8 Slde Oulne Med neger lnear (MILP) modelng Dsuncve modelng Knapsack modelng Consran programmng models Inegraed Models
More informationP R = P 0. The system is shown on the next figure:
TPG460 Reservor Smulaon 08 page of INTRODUCTION TO RESERVOIR SIMULATION Analycal and numercal soluons of smple one-dmensonal, one-phase flow equaons As an nroducon o reservor smulaon, we wll revew he smples
More informationGraduate Macroeconomics 2 Problem set 5. - Solutions
Graduae Macroeconomcs 2 Problem se. - Soluons Queson 1 To answer hs queson we need he frms frs order condons and he equaon ha deermnes he number of frms n equlbrum. The frms frs order condons are: F K
More informationNeural Networks-Based Time Series Prediction Using Long and Short Term Dependence in the Learning Process
Neural Neworks-Based Tme Seres Predcon Usng Long and Shor Term Dependence n he Learnng Process J. Puchea, D. Paño and B. Kuchen, Absrac In hs work a feedforward neural neworksbased nonlnear auoregresson
More informationOn the Boyd- Kuramoto Model : Emergence in a Mathematical Model for Adversarial C2 Systems
On he oyd- Kuramoo Model : Emergence n a Mahemacal Model for Adversaral C2 Sysems Alexander Kallonas DSTO, Jon Operaons Dvson C2 Processes: many are cycles! oyd s Observe-Oren-Decde-Ac Loop: Snowden s
More information