
Machine Learning, 43, 65–91, 2001
© 2001 Kluwer Academic Publishers. Manufactured in The Netherlands.

Drifting Games

ROBERT E. SCHAPIRE    schapire@research.att.com
AT&T Labs Research, Shannon Laboratory, 180 Park Avenue, Room A279, Florham Park, NJ, USA

Editor: Yoram Singer

Abstract. We introduce and study a general, abstract game played between two players called the shepherd and the adversary. The game is played in a series of rounds using a finite set of chips which are moved about in R^n. On each round, the shepherd assigns a desired direction of movement and an importance weight to each of the chips. The adversary then moves the chips in any way that need only be weakly correlated with the desired directions assigned by the shepherd. The shepherd's goal is to cause the chips to be moved to low-loss positions, where the loss of each chip at its final position is measured by a given loss function. We present a shepherd algorithm for this game and prove an upper bound on its performance. We also prove a lower bound showing that the algorithm is essentially optimal for a large number of chips. We discuss computational methods for efficiently implementing our algorithm. We show that our general drifting-game algorithm subsumes some well studied boosting and on-line learning algorithms whose analyses follow as easy corollaries of our general result.

Keywords: boosting, on-line learning algorithms

1. Introduction

We introduce a general, abstract game played between two players called the shepherd and the adversary. The game is played in a series of rounds using a finite set of m chips which are moved about in R^n. On each round, the shepherd assigns a desired direction of movement to each of the chips, as well as a nonnegative weight measuring the relative importance that each chip be moved in the desired direction. In response, the adversary moves each chip however it wishes, so long as the relative movements of the chips projected in the directions chosen by the shepherd are at least δ, on average. Here, the average is taken with respect to the importance weights that were selected by the shepherd, and δ ≥ 0 is a given parameter of the game.
Since we think of δ as a small number, the adversary need move the chips in a fashion that is only weakly correlated with the directions desired by the shepherd. The adversary is also restricted to choose relative movements for the chips from a given set B ⊆ R^n. The goal of the shepherd is to force the chips to be moved to low-loss positions, where the loss of each chip at its final position is measured by a given loss function L. A more formal description of the game is given in Section 2.

We present in Section 4 a new algorithm called OS for playing this game in the role of the shepherd, and we analyze the algorithm's performance for any parameterization of the game meeting certain natural conditions. Under the same conditions, we also prove in Section 5 that our algorithm is the best possible when the number of chips becomes large.

As spelled out in Section 3, the drifting game is closely related to boosting, the problem of finding a highly accurate classification rule by combining many weak classifiers or hypotheses. The drifting game and its analysis are generalizations of Freund's (1995) majority-vote game, which was used to derive his boost-by-majority algorithm. This latter algorithm is optimal in a certain sense for boosting binary problems using weak hypotheses which are restricted to making binary predictions. However, the boost-by-majority algorithm has never been generalized to multiclass problems, nor to a setting in which weak hypotheses may abstain or give graded predictions between two classes. The general drifting game that we study leads immediately to new boosting algorithms for these settings. By our result on the optimality of the OS algorithm, these new boosting algorithms are also best possible, assuming as we do in this paper that the final hypothesis is restricted in form to a simple majority vote. We do not know if the derived algorithms are optimal without this restriction.

In Section 6, we discuss computational methods for implementing the OS algorithm. We give a useful theorem for handling games in which the loss function enjoys certain monotonicity properties. We also give a more general technique using linear programming for implementing OS in many settings, including the drifting game that corresponds to multiclass boosting. In this latter case, the algorithm runs in polynomial time when the number of classes is held constant.

In Section 7, we discuss the analysis of several drifting games corresponding to previously studied learning problems. For the drifting games corresponding to binary boosting with or without abstaining weak hypotheses, we show how to implement the algorithm efficiently. We also show that there are parameterizations of the drifting game under which OS is equivalent to a simplified version of the AdaBoost algorithm (Freund & Schapire, 1997; Schapire & Singer, 1999), as well as Cesa-Bianchi et al.'s (1996) BW algorithm and Littlestone and Warmuth's (1994) weighted majority algorithm for combining the advice of experts in an on-line learning setting. Analyses of these algorithms follow as easy corollaries of the analysis we give for general drifting games.

2. Drifting games

We begin with a formal description of the drifting game.
An outline of the game is shown in figure 1. There are two players in the game called the shepherd and the adversary. The game is played in T rounds using m chips. On each round t, the shepherd specifies a weight vector w_t^i ∈ R^n for each chip i. The direction of this vector, v_t^i = w_t^i / ‖w_t^i‖_p, specifies a desired direction of drift, while the length of the vector ‖w_t^i‖_p specifies the relative importance of moving the chip in the desired direction. In response, the adversary chooses a drift vector z_t^i for each chip i. The adversary is constrained to choose each z_t^i from a fixed set B ⊆ R^n. Moreover, the z_t^i's must satisfy

Σ_{i=1}^m w_t^i · z_t^i ≥ δ Σ_{i=1}^m ‖w_t^i‖_p    (1)

parameters:
  number of rounds T
  dimension n of the space
  set B ⊆ R^n of permitted relative movements
  norm ℓ_p where p ≥ 1
  minimum average drift δ ≥ 0
  loss function L : R^n → R
  number of chips m

for t = 1, ..., T:
  the shepherd chooses a weight vector w_t^i ∈ R^n for each chip i
  the adversary chooses a drift vector z_t^i ∈ B for each chip i so that
    Σ_{i=1}^m w_t^i · z_t^i ≥ δ Σ_{i=1}^m ‖w_t^i‖_p

the final loss suffered by the shepherd is (1/m) Σ_{i=1}^m L(Σ_{t=1}^T z_t^i)

Figure 1. The drifting game.

or equivalently

Σ_i (‖w_t^i‖_p / Σ_j ‖w_t^j‖_p) v_t^i · z_t^i ≥ δ    (2)

where δ ≥ 0 is a fixed parameter of the game. (Here and throughout the paper, when clear from context, Σ_i denotes Σ_{i=1}^m; likewise, we will shortly use the notation Σ_t for Σ_{t=1}^T.) In words, v_t^i · z_t^i is the amount by which chip i has moved in the desired direction. Thus, the left hand side of Eq. (2) represents a weighted average of the drifts of the chips projected in the desired directions, where chip i's projected drift is weighted by ‖w_t^i‖_p / Σ_j ‖w_t^j‖_p. We require that this average projected drift be at least δ.

The position of chip i at time t, denoted by s_t^i, is simply the sum of the drifts of that chip up to that point in time. Thus, s_1^i = 0 and s_{t+1}^i = s_t^i + z_t^i. The final position of chip i at the end of the game is s_{T+1}^i.

At the end of T rounds, we measure the shepherd's performance using a function L of the final positions of the chips; this function is called the loss function. Specifically, the shepherd's goal is to minimize

(1/m) Σ_i L(s_{T+1}^i).

Summarizing, we see that a game is specified by several parameters: the number of rounds T; the dimension n of the space; a norm p on R^n; a set B ⊆ R^n; a minimum drift constant δ ≥ 0; a loss function L; and the number of chips m. Since the lengths of weight vectors w_t^i are measured using an ℓ_p-norm, it is natural to measure drift vectors z_t^i using the dual ℓ_q-norm, where 1/p + 1/q = 1. When clear from context, we will generally drop p and q subscripts and write simply ‖w‖ or ‖z‖.
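The protocol of figure 1 can be sketched as a short simulation. This is a hypothetical illustration in one dimension with p = 1: the shepherd strategy below (always request rightward drift with unit importance) is a placeholder, not the OS algorithm of Section 4, and the adversary simply proposes random drifts and falls back to a compliant move when the proposal violates Eq. (1).

```python
import random

# One play of the drifting game of figure 1 (n = 1, p = 1), with placeholder
# strategies. The goal interval [2, 7] is the example game discussed below.
T, m, delta = 20, 10, 0.1
B = [-1, +1]                                   # permitted relative movements

def loss(s):                                   # L: zero loss inside [2, 7]
    return 0.0 if 2 <= s <= 7 else 1.0

rng = random.Random(1)
positions = [0] * m                            # s_1^i = 0 for every chip i
for t in range(T):
    w = [1.0] * m                              # shepherd: direction +1, equal weight
    z = [rng.choice(B) for _ in range(m)]      # adversary proposes drifts
    # enforce sum_i w^i z^i >= delta * sum_i |w^i|, i.e. Eq. (1)
    if sum(wi * zi for wi, zi in zip(w, z)) < delta * sum(abs(wi) for wi in w):
        z = [+1] * m                           # a move that surely satisfies it
    positions = [s + zi for s, zi in zip(positions, z)]

final_loss = sum(loss(s) for s in positions) / m
assert 0.0 <= final_loss <= 1.0
```

The final line evaluates the quantity the shepherd wants small, (1/m) Σ_i L(s_{T+1}^i).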

As an example of a drifting game, suppose that the game is played on the real line and that the shepherd's goal is to get as many chips as possible into the interval [2, 7]. Suppose further that the adversary is constrained to move each chip left or right by one unit, and that, on each round, 10% of the chips (as weighted by the shepherd's chosen distribution over chips) must be moved in the shepherd's desired direction. Then for this game, n = 1, B = {−1, +1} and δ = 0.1. Any norm will do (since we are working in just one dimension), and the loss function is

L(s) = 0 if s ∈ [2, 7], and L(s) = 1 otherwise.

We will return to this example later in the paper.

Drifting games bear a certain resemblance to the kind of games studied in Blackwell's (1956) celebrated approachability theory. However, it is unclear what the exact relationship is between these two types of games and whether one type is a special case of the other.

3. Relation to boosting

In this section, we describe how the general game of drift relates directly to boosting. In the simplest boosting model, there is a boosting algorithm that has access to a weak learning algorithm that it calls in a series of rounds. There are given m labeled examples (x_1, y_1), ..., (x_m, y_m) where x_i ∈ X and y_i ∈ {−1, +1}. On each round t, the booster chooses a distribution D_t(i) over the examples. The weak learner then must generate a weak hypothesis h_t : X → {−1, +1} whose error is at most 1/2 − γ with respect to distribution D_t. That is,

Pr_{i∼D_t}[y_i ≠ h_t(x_i)] ≤ 1/2 − γ.    (3)

Here, γ > 0 is known a priori to both the booster and the weak learner. After T rounds, the booster outputs a final hypothesis which we here assume is a majority vote of the weak hypotheses:

H(x) = sgn(Σ_t h_t(x)).    (4)

For our purposes, the goal of the booster is to minimize the fraction of mistakes of the final hypothesis on the given set of examples:

(1/m) |{i : y_i ≠ H(x_i)}|.    (5)

We can recast boosting as just described as a special-case drifting game; a similar game, called the majority-vote game, was studied by Freund (1995) for this case. The chips are identified with examples, and the game is one-dimensional so that n = 1. The drift of a chip z_t^i is +1 if example i is correctly classified by h_t and −1 otherwise; that is,

z_t^i = y_i h_t(x_i)

and B = {−1, +1}. The weight w_t^i is formally permitted to be negative, something that does not make sense in the boosting setting; however, for the optimal shepherd described in the next section, this weight will always be nonnegative for this game (by Theorem 7), so we henceforth assume that w_t^i ≥ 0. The distribution D_t(i) corresponds to w_t^i / Σ_j w_t^j. Then the condition in Eq. (3) is equivalent to

Σ_i (w_t^i / Σ_j w_t^j) ((1 − z_t^i)/2) ≤ 1/2 − γ

or

Σ_i w_t^i z_t^i ≥ 2γ Σ_i w_t^i.    (6)

This is the same as Eq. (1) if we let δ = 2γ. Finally, if we define the loss function to be

L(s) = 1 if s ≤ 0, and L(s) = 0 if s > 0,    (7)

then

(1/m) Σ_i L(s_{T+1}^i)    (8)

is exactly equal to Eq. (5).

Our main result on playing drifting games yields in this case exactly Freund's boost-by-majority algorithm (1995). There are numerous variants of this basic boosting setting to which Freund's algorithm has never been generalized and analyzed. For instance, we have so far required weak hypotheses to output values in {−1, +1}. It is natural to generalize this model to allow weak hypotheses to take values in {−1, 0, +1} so that the weak hypotheses may abstain on some examples, or to take values in [−1, +1] so that a whole range of values is possible. These correspond to simple modifications of the drifting game described above in which we simply change B to {−1, 0, +1} or [−1, +1]. As before, we require that Eq. (6) hold for all weak hypotheses, and we attempt to design a boosting algorithm which minimizes Eq. (8). For both of these cases, we are able to derive analogs of the boost-by-majority algorithm which we prove are optimal in a particular sense.

Another direction for generalization is to the non-binary multiclass case in which labels y_i belong to a set Y = {1, ..., n}, n > 2. Following generalizations of the boosting algorithm AdaBoost to the multiclass case (Freund & Schapire, 1997; Schapire & Singer, 1999), we allow the booster to assign weights both to examples and labels. That is, on each round t, the booster devises a distribution D_t(i, l) over examples i and labels l ∈ Y. The weak learner then computes a weak hypothesis h_t : X × Y → {−1, +1} which must be correct on a non-trivial fraction of the example-label pairs. That is, if we define

χ_y(l) = +1 if y = l, and χ_y(l) = −1 otherwise,

then we require

Pr_{(i,l)∼D_t}[h_t(x_i, l) ≠ χ_{y_i}(l)] ≤ 1/2 − γ.    (9)

The final hypothesis, we assume, is again a plurality vote of the weak hypotheses:

H(x) = arg max_{y∈Y} Σ_t h_t(x, y).    (10)

We can cast this multiclass boosting problem as a drifting game as follows. We have n dimensions, one per class. It will be convenient for the first dimension always to correspond to the correct label, with the remaining n − 1 dimensions corresponding to incorrect labels. To do this, let us define a map π_l : R^n → R^n which simply swaps coordinates 1 and l, leaving the other coordinates untouched. The weight vectors w_t^i correspond to the distribution D_t, modulo swapping of coordinates, a correction of sign and normalization:

D_t(i, l) = |[π_{y_i}(w_t^i)]_l| / Σ_j ‖w_t^j‖_1.

The norm used here to measure weight vectors is the ℓ_1-norm. Also, it will follow from Theorem 7 that, for optimal play of this game, the first coordinate of w_t^i is always nonnegative and all other coordinates are nonpositive. The drift vectors z_t^i are derived as before from the weak hypotheses:

z_t^i = π_{y_i}(h_t(x_i, 1), ..., h_t(x_i, n)).

It can be verified that the condition in Eq. (9) is equivalent to Eq. (1) with δ = 2γ. For binary weak hypotheses, B = {−1, +1}^n. The final hypothesis H makes a mistake on example (x, y) if and only if Σ_t h_t(x, y) ≤ max_{l: l≠y} Σ_t h_t(x, l). Therefore, we can count the fraction of mistakes of the final hypothesis in the drifting game context as

(1/m) Σ_i L(s_{T+1}^i)

where

L(s) = 1 if s_1 ≤ max{s_2, ..., s_n}, and L(s) = 0 otherwise.    (11)
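The binary reduction at the start of this section can be sanity-checked numerically. The data below are hypothetical; the check confirms that with drifts z^i = y_i h(x_i), the weighted error equals (1 − average drift)/2, so the weak-learning condition of Eq. (3) is exactly the drift condition of Eq. (1) with δ = 2γ.

```python
# Hypothetical labels, weak-hypothesis predictions and nonnegative weights;
# the distribution is D(i) = w[i] / sum(w).
y = [+1, -1, +1, +1, -1]
h = [+1, -1, -1, +1, +1]
w = [0.3, 0.1, 0.2, 0.25, 0.15]

err = sum(wi for wi, yi, hi in zip(w, y, h) if yi != hi) / sum(w)
z = [yi * hi for yi, hi in zip(y, h)]                       # drifts z^i
avg_drift = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

# error <= 1/2 - gamma  <=>  average drift >= 2*gamma, since err = (1 - drift)/2
assert abs(err - (1 - avg_drift) / 2) < 1e-12
```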

Thus, by giving an algorithm for the general drifting game, we also obtain a generalization of the boost-by-majority algorithm for multiclass problems. The algorithm can be implemented in this case in polynomial time for a constant number of classes n, and the algorithm is provably best possible in a particular sense.

We note also that a simplified form of the AdaBoost algorithm (Freund & Schapire, 1997; Schapire & Singer, 1999) can be derived as an instance of the OS algorithm simply by changing the loss function L in Eq. (7) to an exponential L(s) = exp(−ηs) for some η > 0. More details on this game are given in Section 7.2.

Besides boosting problems, the drifting game also generalizes the problem of learning on-line with a set of experts (Cesa-Bianchi et al., 1997; Littlestone & Warmuth, 1994). In particular, the BW algorithm of Cesa-Bianchi et al. (1996) and the weighted majority algorithm of Littlestone and Warmuth (1994) can be derived as special cases of our main algorithm for a particular natural parameterization of the drifting game. Details are given in Section 7.3.

4. The algorithm and its analysis

We next describe our algorithm for playing the general drifting game of Section 2. Like Freund's boost-by-majority algorithm (1995), the algorithm we present here uses a potential function which is central both to the workings of the algorithm and to its analysis. This function can be thought of as a guess of the loss that we expect to suffer for a chip at a particular position and at a particular point in time. We denote the potential of a chip at position s on round t by φ_t(s). The final potential is the actual loss, so that φ_T = L. The potential functions φ_t for earlier time steps are defined inductively:

φ_{t−1}(s) = min_{w∈R^n} sup_{z∈B} (φ_t(s + z) + w · z − δ‖w‖_p).    (12)

We will show later that, under natural conditions, the minimum above actually exists. Moreover, the minimizing vector w is the one used by the shepherd for the algorithm we now present. We call our shepherd algorithm OS, for "optimal shepherd." The weight vector w_t^i chosen by OS for chip i is any vector w which minimizes

sup_{z∈B} (φ_t(s_t^i + z) + w · z − δ‖w‖_p).

Returning to the example at the end of Section 2, figure 2 shows the potential function φ_t and the weights that would be selected by OS as a function of the position of each chip for various choices of t. For this figure, T = 20.
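For this example game the backward recursion of Eq. (12) can be evaluated directly. The sketch below is a hypothetical implementation, not the paper's: with B = {−1, +1} the objective H(w) = max(φ_t(s−1) − w − δ|w|, φ_t(s+1) + w − δ|w|) is piecewise linear in w, and both pieces share the −δ|w| term, so its minimum over all real w is attained either at w = 0 or at the single crossing w = (φ_t(s−1) − φ_t(s+1))/2.

```python
# Tabulating the potential functions (and, via backup, the OS weights) for the
# example game: goal interval [2, 7], B = {-1, +1}, delta = 0.1, T = 20.
T, delta = 20, 0.1
L = lambda s: 0.0 if 2 <= s <= 7 else 1.0

def backup(a, b, delta):
    # a = phi_t(s - 1), b = phi_t(s + 1); returns (phi_{t-1}(s), OS weight).
    H = lambda w: max(a - w - delta * abs(w), b + w - delta * abs(w))
    w = min([0.0, (a - b) / 2.0], key=H)       # only breakpoints of H
    return H(w), w

# phi_T on integer positions s = -(T+1), ..., T+1; the table shrinks by one
# cell on each backward step, leaving phi_0 on s = -1, 0, 1.
phi = [L(s) for s in range(-(T + 1), T + 2)]
for t in range(T, 0, -1):
    phi = [backup(phi[i - 1], phi[i + 1], delta)[0] for i in range(1, len(phi) - 1)]

bound = phi[len(phi) // 2]                     # phi_0(0), the bound of Theorem 2
assert 0.0 <= bound <= 1.0
```

Plotting `backup(...)[1]` against s for various t would reproduce curves of the kind shown in figure 2.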
We will need some natural assumptions to analyze this algorithm. The first assumption states merely that the allowed drift vectors in B are bounded; for convenience, we assume they have norm at most one.

Assumption 1. sup_{z∈B} ‖z‖_q ≤ 1.

We next assume that the loss function L is bounded.

Figure 2. Plots of the potential function (top curve in each figure) and the weights selected by OS (bottom curves) as a function of the position of a chip in the example game at the end of Section 2, for various choices of t and with T = 20. The vertical dotted lines show the boundary of the goal interval [2, 7]. Curves are only meaningful at integer values.

Assumption 2. There exist finite L_min and L_max such that L_min ≤ L(s) ≤ L_max for all s ∈ R^n.

In fact, this assumption need only hold for all s with ‖s‖_q ≤ T, since positions outside this range are never reached, given Assumption 1. Finally, we assume that, for any direction v, it is possible to choose a drift whose projection onto v exceeds δ by a constant amount.

Assumption 3. There exists a number µ > 0 such that for all w ∈ R^n there exists z ∈ B with w · z ≥ (δ + µ)‖w‖.

Lemma 1. Given Assumptions 1, 2 and 3, for all t = 0, ..., T:
1. the minimum in Eq. (12) exists; and
2. L_min ≤ φ_t(s) ≤ L_max for all s ∈ R^n.

Proof: By backwards induction on t. The base cases are trivial. Let us fix s and let F(z) = φ_t(s + z). Let

H(w) = sup_{z∈B} (F(z) + w · z − δ‖w‖).

Using Assumption 1, for any w, w′:

H(w′) − H(w) ≤ sup_{z∈B} [(F(z) + w′ · z − δ‖w′‖) − (F(z) + w · z − δ‖w‖)]
= sup_{z∈B} [(w′ − w) · z + δ(‖w‖ − ‖w′‖)]
≤ (1 + δ)‖w′ − w‖.

Therefore, H is continuous. Moreover, for w ∈ R^n, by Assumptions 2 and 3 (as well as our inductive hypothesis),

H(w) ≥ L_min + (δ + µ)‖w‖ − δ‖w‖ = L_min + µ‖w‖.    (13)

Since

H(0) ≤ L_max,    (14)

it follows that H(w) > H(0) if ‖w‖ > (L_max − L_min)/µ. Thus, for computing the minimum of H, we only need consider points in the compact set

{ w : ‖w‖ ≤ (L_max − L_min)/µ }.

Since a continuous function over a compact set has a minimum, this proves Part 1. Part 2 follows immediately from Eqs. (13) and (14).

We next prove an upper bound on the loss suffered by a shepherd employing the OS algorithm against any adversary. This is the main result of this section. We will shortly see that this bound is essentially best possible for any algorithm. It is important to note that these theorems tell us much more than the almost obvious point that the optimal thing to do is whatever is best in a minimax sense. These theorems prove the nontrivial fact that (nearly) minimax behavior can be obtained without the simultaneous consideration of all of the chips at once. Rather, we can compute each weight vector w_t^i merely as a function of the position of chip i, without consideration of the positions of any of the other chips.

Theorem 2. Under the conditions of Assumptions 1–3, the final loss suffered by the OS algorithm against any adversary is at most φ_0(0), where the functions φ_t are defined above.

Proof: Following Freund's analysis (1995), we show that the total potential never increases. That is, we prove by induction that

Σ_i φ_t(s_{t+1}^i) ≤ Σ_i φ_{t−1}(s_t^i).    (15)

This implies, through repeated application of Eq. (15), that

(1/m) Σ_i L(s_{T+1}^i) = (1/m) Σ_i φ_T(s_{T+1}^i) ≤ (1/m) Σ_i φ_0(s_1^i) = φ_0(0)

as claimed. The definition of φ_{t−1} given in Eq. (12) implies that for w_t^i chosen by the OS algorithm, and for all z ∈ B and all s ∈ R^n:

φ_t(s + z) + w_t^i · z − δ‖w_t^i‖ ≥ φ_{t−1}(s).

Therefore,

Σ_i φ_t(s_{t+1}^i) = Σ_i φ_t(s_t^i + z_t^i) ≤ Σ_i (φ_{t−1}(s_t^i) − w_t^i · z_t^i + δ‖w_t^i‖) ≤ Σ_i φ_{t−1}(s_t^i)

where the last inequality follows from Eq. (1).

Returning again to the example at the end of Section 2, figure 3 shows a plot of the bound φ_0(0) as a function of the total number of rounds T. It is rather curious that the bound is not monotonic in T (even discounting the jagged nature of the curve caused by the difference between even and odd length games). Apparently, for this game, having more time to get the chips into the goal region can actually hurt the shepherd.

Figure 3. A plot of the loss bound φ_0(0) as a function of the total number of rounds T for the example game at the end of Section 2. The jagged nature of the curve is due to the difference between a game with an odd or an even number of steps.

5. A lower bound

In this section, we prove that the OS algorithm is essentially optimal in the sense that, for any shepherd algorithm, there exists an adversary capable of forcing a loss matching the upper bound of Theorem 2 in the limit of a large number of chips. Specifically, we prove the following theorem, the main result of this section:

Theorem 3. Let A be any shepherd algorithm for playing a drifting game satisfying Assumptions 1–3, where all parameters of the game are fixed except the number of chips m. Let φ_t be as defined above. Then for any ε > 0, there exists an adversary such that for m sufficiently large, the loss suffered by algorithm A is at least φ_0(0) − ε.

To prove the theorem, we will need two lemmas. The first gives an abstract result on computing a minimax of the kind appearing in Eq. (12). The second lemma uses the first to prove a characterization of φ_t in a form amenable to use in the proof of Theorem 3.

Lemma 4. Let S be any nonempty, bounded subset of R². Let C be the convex hull of S. Then

inf_{α∈R} sup{y + αx : (x, y) ∈ S} = sup{y : (0, y) ∈ C}.

Proof: Let C̄ be the closure of C. First, for any α ∈ R,

sup{y + αx : (x, y) ∈ S} = sup{y + αx : (x, y) ∈ C} = sup{y + αx : (x, y) ∈ C̄}.    (16)

The first equality follows from the fact that, if (x, y) ∈ C then

(x, y) = Σ_{j=1}^N p_j (x_j, y_j)

for some positive integer N, p_j ∈ [0, 1], Σ_j p_j = 1, (x_j, y_j) ∈ S. But then

y + αx = Σ_{j=1}^N p_j (y_j + αx_j) ≤ max_j (y_j + αx_j).

The second equality in Eq. (16) follows simply because the supremum of a continuous function on any set is equal to its supremum over the closure of the set. For this same reason,

sup{y : (0, y) ∈ C} = sup{y : (0, y) ∈ C̄}.    (17)

Because C̄ is closed, convex and bounded, and because the function y + αx is continuous, concave in (x, y) and convex in α, we can reverse the order of the inf and sup (see, for instance, Rockafellar (1970)). That is,

inf_{α∈R} sup_{(x,y)∈C̄} (y + αx) = sup_{(x,y)∈C̄} inf_{α∈R} (y + αx).    (18)

Clearly, if x ≠ 0 then

inf_{α∈R} (y + αx) = −∞.

Thus, the right hand side of Eq. (18) is equal to sup{y : (0, y) ∈ C̄}. Combining with Eqs. (16) and (17) immediately gives the result.
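Lemma 4 can be checked numerically on a small finite set S (the three points below are hypothetical). For finite S, the left side's upper envelope over α is piecewise linear, so its infimum lies at a crossing of two of the lines y_j + αx_j; the right side is the best y where a segment between points of S meets the axis x = 0.

```python
S = [(-1.0, 0.0), (2.0, 0.5), (1.0, 1.0)]
pairs = [(S[i], S[j]) for i in range(len(S)) for j in range(i + 1, len(S))]

# sup{y : (0, y) in conv(S)}: y-intercepts of segments of S that span x = 0
rhs = max([y for x, y in S if x == 0.0] +
          [(y2 * x1 - y1 * x2) / (x1 - x2)
           for (x1, y1), (x2, y2) in pairs if x1 * x2 <= 0.0 and x1 != x2])

# inf_alpha sup{y + alpha*x}: minimize the piecewise-linear upper envelope
cands = [0.0] + [(y1 - y2) / (x2 - x1)
                 for (x1, y1), (x2, y2) in pairs if x1 != x2]
lhs = min(max(y + a * x for x, y in S) for a in cands)

assert abs(lhs - rhs) < 1e-9                   # both sides agree, as Lemma 4 asserts
```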

Lemma 5. Under the conditions of Assumptions 1–3, and for φ_t as defined above,

φ_{t−1}(s) = inf_{v: ‖v‖=1} sup Σ_{j=1}^N d_j φ_t(s + z_j)

where the supremum is taken over all positive integers N, all z_1, ..., z_N ∈ B and all nonnegative d_1, ..., d_N satisfying Σ_j d_j = 1 and Σ_j d_j v · z_j = δ.

Proof: To simplify notation, let us fix t and s. Let F and H be as defined in the proof of Lemma 1. For ‖v‖ = 1, let

G(v) = sup Σ_{j=1}^N d_j F(z_j)    (19)

where again the supremum is taken over d_j's and z_j's as in the statement of the lemma. Note that, by Assumption 3, this supremum cannot be vacuous. Throughout this proof, we use v to denote a vector of norm one, while w is a vector of unrestricted norm. Our goal is to show that

inf_v G(v) = inf_w H(w).    (20)

Let us fix v momentarily. Let

S = {(v · z − δ, F(z)) : z ∈ B}.

Then S is bounded by Assumptions 1–3 (and part 2 of Lemma 1), so we can apply Lemma 4, which gives

inf_{α∈R} sup_{z∈B} (F(z) + α(v · z − δ)) = G(v).    (21)

Note that

inf_{α≥0} H(αv) = inf_{α≥0} sup_{z∈B} (F(z) + αv · z − αδ)
≥ inf_{α∈R} sup_{z∈B} (F(z) + αv · z − αδ)
≥ inf_{α∈R} sup_{z∈B} (F(z) + αv · z − |α|δ) = inf_{α∈R} H(αv)

(where the second inequality uses α ≤ |α|). Combining with Eq. (21) gives

inf_v inf_{α≥0} H(αv) ≥ inf_v G(v) ≥ inf_v inf_{α∈R} H(αv).

Since the left and right terms are both equal to inf_w H(w), this implies Eq. (20) and completes the proof.

Proof of Theorem 3: We will show that, for m sufficiently large, on round t the adversary can choose the z_t^i's so that

(1/m) Σ_i φ_t(s_{t+1}^i) ≥ (1/m) Σ_i φ_{t−1}(s_t^i) − ε/T.    (22)

Repeatedly applying Eq. (22) implies that

(1/m) Σ_i L(s_{T+1}^i) = (1/m) Σ_i φ_T(s_{T+1}^i) ≥ (1/m) Σ_i φ_0(s_1^i) − ε = φ_0(0) − ε,

proving the theorem.

Fix t. We use a random construction to show that there exist z_t^i's with the desired properties. For each weight vector w_t^i chosen by the shepherd, let d_1^i, ..., d_N^i ∈ [0, 1] and z_1^i, ..., z_N^i ∈ B be such that Σ_j d_j^i = 1, Σ_j d_j^i w_t^i · z_j^i = δ‖w_t^i‖ and

Σ_j d_j^i φ_t(s_t^i + z_j^i) ≥ φ_{t−1}(s_t^i) − ε/(2T).

Such d_j^i's and z_j^i's must exist by Lemma 5. Using Assumption 3, let z_0^i be such that w_t^i · z_0^i ≥ (δ + µ)‖w_t^i‖. Finally, let Z^i be a random variable that is z_0^i with probability α and z_j^i with probability (1 − α)d_j^i (independent of the other Z^i's). Here,

α = ε / (4T(L_max − L_min)).

Let v^i = w_t^i / ‖w_t^i‖, and let a^i = ‖w_t^i‖ / Σ_j ‖w_t^j‖. By Assumption 1, |v^i · Z^i| ≤ 1. Also,

E[v^i · Z^i] ≥ (1 − α)δ + α(δ + µ) = δ + αµ.

Thus, by Hoeffding's inequality (1963),

Pr[Σ_i a^i v^i · Z^i < δ] ≤ exp(−α²µ² / (2 Σ_i (a^i)²)) ≤ e^{−α²µ²/2}    (23)

(the last inequality holds because Σ_i a^i = 1 and hence Σ_i (a^i)² ≤ 1).

Let S = (1/m) Σ_i φ_t(s_t^i + Z^i). Then

E[S] ≥ (1/m) Σ_i [(φ_{t−1}(s_t^i) − ε/(2T))(1 − α) + α φ_t(s_t^i + z_0^i)]
= (1/m) Σ_i [φ_{t−1}(s_t^i) + α(φ_t(s_t^i + z_0^i) − φ_{t−1}(s_t^i))] − (1 − α) ε/(2T)
≥ (1/m) Σ_i φ_{t−1}(s_t^i) − α(L_max − L_min) − ε/(2T).    (24)

By Hoeffding's inequality (1963), since L_min ≤ φ_t(s + Z) ≤ L_max,

Pr[S < E[S] − α(L_max − L_min)] ≤ e^{−2α²m}.    (25)

Now let m be so large that e^{−2α²m} + e^{−α²µ²/2} < 1. Then by Eqs. (23) and (25), there exists a choice of the z_t^i's such that

Σ_i w_t^i · z_t^i = (Σ_j ‖w_t^j‖) Σ_i a^i v^i · z_t^i ≥ δ Σ_i ‖w_t^i‖

and such that

(1/m) Σ_i φ_t(s_{t+1}^i) = (1/m) Σ_i φ_t(s_t^i + z_t^i) ≥ E[S] − α(L_max − L_min) ≥ (1/m) Σ_i φ_{t−1}(s_t^i) − ε/T

by Eq. (24) and our choice of α.

6. Computational methods

In this section, we discuss general computational methods for implementing the OS algorithm.

6.1. Unate loss functions

We first note that, for loss functions L with certain monotonicity properties, the quadrant in which the minimizing weight vectors are to be found can be determined a priori. This often simplifies the search for minima. To be more precise, for σ ∈ {−1, +1}^n and x, y ∈ R^n,

let us write x ≤_σ y if σ_i x_i ≤ σ_i y_i for all 1 ≤ i ≤ n. We say that a function f : R^n → R is unate with sign vector σ ∈ {−1, +1}^n if f(x) ≤ f(y) whenever x ≤_σ y.

Lemma 6. If the loss function L is unate with sign vector σ ∈ {−1, +1}^n, then so is φ_t (as defined above) for t = 0, ..., T.

Proof: By backwards induction on t. The base case is immediate. Let x ≤_σ y. Then for any z ∈ B and w ∈ R^n, x + z ≤_σ y + z, and so

φ_t(x + z) + w · z − δ‖w‖ ≤ φ_t(y + z) + w · z − δ‖w‖

by the inductive hypothesis. Therefore, φ_{t−1}(x) ≤ φ_{t−1}(y), and so φ_{t−1} is also unate.

For the main theorem of this subsection, we need one more assumption:

Assumption 4. If z ∈ B and if z′ is such that |z′_i| = |z_i| for all i, then z′ ∈ B.

Theorem 7. Under the conditions of Assumptions 1–4, if L is unate with sign vector σ ∈ {−1, +1}^n, then for any s ∈ R^n, there is a vector w which minimizes

sup_{z∈B} (φ_t(s + z) + w · z − δ‖w‖)

and for which w ≤_σ 0.

Proof: Let F and H be as in the proof of Lemma 1. By Lemma 6, F is unate. Let w ∈ R^n have some coordinate i for which σ_i w_i > 0, so that w ≤_σ 0 fails. Let w′ be such that

w′_j = w_j if j ≠ i, and w′_i = −w_i.

We show that H(w′) ≤ H(w). Let z ∈ B. If σ_i z_i > 0 then

F(z) + w′ · z − δ‖w′‖ ≤ F(z) + w · z − δ‖w‖.

If σ_i z_i ≤ 0 then let z′ be defined analogously to w′. By Assumption 4, z′ ∈ B. Then z ≤_σ z′ and so F(z) ≤ F(z′). Thus,

F(z) + w′ · z − δ‖w′‖ = F(z) + w · z′ − δ‖w‖ ≤ F(z′) + w · z′ − δ‖w‖.

Hence, H(w′) ≤ H(w). Applying this argument repeatedly, we can derive a vector w̃ with w̃ ≤_σ 0 and such that H(w̃) ≤ H(w). This proves the theorem.

Note that the loss functions for all of the games in Section 3 are unate (and also satisfy Assumptions 1–4). The same will be true of all of the games discussed in Section 7. Thus,

for all of these games, we can determine a priori the signs of each of the coordinates of the minimizing vectors used by the OS algorithm.

6.2. A general technique using linear programming

In many cases, we can use linear programming to implement OS. In particular, let us assume that we measure weight vectors w using the ℓ_1 norm (i.e., p = 1). Also, let us assume that B is finite. Then given φ_t and s, computing

φ_{t−1}(s) = min_{w∈R^n} max_{z∈B} (φ_t(s + z) + w · z − δ‖w‖)

can be rewritten as an optimization problem:

variables:   w ∈ R^n, b ∈ R
minimize:    b
subject to:  φ_t(s + z) + w · z − δ‖w‖ ≤ b for all z ∈ B.

The minimizing value of b is the desired value of φ_{t−1}(s). Note that, with respect to the variables w and b, this problem is almost a linear program, if not for the norm operator. However, when L is unate with sign vector σ, and when the other conditions of Theorem 7 hold, we can restrict w so that w ≤_σ 0. This allows us to write

‖w‖_1 = −Σ_{i=1}^n σ_i w_i.

Adding w ≤_σ 0 as a constraint (or rather, a set of n constraints), we have now derived a linear program with n + 1 variables and |B| + n constraints. It can be solved in polynomial time.

Thus, for instance, this technique can be applied to the multiclass boosting problem discussed in Section 3. In this case, B = {−1, +1}^n. So, for any s, φ_{t−1}(s) can be computed from φ_t in time polynomial in 2^n, which may be reasonable for small n. In addition, φ_t must be computed at each reachable position s in an n-dimensional integer grid of radius t, i.e., for all s ∈ {−t, −t + 1, ..., t − 1, t}^n. This involves computation of φ_t at (2t + 1)^n points, giving an overall running time for the algorithm which is polynomial in (2T + 1)^n. Again, this may be reasonable for very small n. It is an open problem to find a way to implement the algorithm more efficiently.

7. Deriving old and new algorithms

In this section, we show how a number of old and new boosting and on-line learning algorithms can be derived and analyzed as instances of the OS algorithm for appropriately chosen drifting games.

7.1. Boost-by-majority and variants

We begin with the drifting game described in Section 3 corresponding to binary boosting with B = {−1, +1}. For this game,

φ_{t−1}(s) = min_{w≥0} max{φ_t(s − 1) − w − δw, φ_t(s + 1) + w − δw}

where we know from Theorem 7 that only nonnegative values of w need to be considered. It can be argued that the minimum must occur when

φ_t(s − 1) − w − δw = φ_t(s + 1) + w − δw,

i.e., when

w = (φ_t(s − 1) − φ_t(s + 1))/2.    (26)

This gives

φ_{t−1}(s) = ((1 + δ)/2) φ_t(s + 1) + ((1 − δ)/2) φ_t(s − 1).

Solving gives

φ_t(s) = Σ_{0 ≤ k ≤ (T−t−s)/2} C(T−t, k) ((1 + δ)/2)^k ((1 − δ)/2)^{T−t−k}

(where we follow the convention that the binomial coefficient C(n, k) = 0 if k < 0 or k > n). Weighting examples using Eq. (26) gives exactly Freund's (1995) boost-by-majority algorithm (the boosting by resampling version).

When B = {−1, 0, +1}, a similar but more involved analysis gives

φ_{t−1}(s) = max{(1 − δ) φ_t(s) + δ φ_t(s + 1), ((1 + δ)/2) φ_t(s + 1) + ((1 − δ)/2) φ_t(s − 1)}    (27)

and the corresponding choice of w_t^i is φ_t(s) − φ_t(s + 1) or (φ_t(s − 1) − φ_t(s + 1))/2, depending on whether the maximum in Eq. (27) is realized by the first or the second quantity. We do not know how to solve the recurrence in Eq. (27) so that the bound φ_0(0) given in Theorem 2 can be put in explicit form. Nevertheless, this bound can easily be evaluated numerically, and the algorithm can certainly be implemented efficiently in its present form.

We have thus far been unable to solve the recurrence for the case that B = [−1, +1], even to a point at which the algorithm can be implemented. However, this case can be approximated by the case in which B = {i/N : i = −N, ..., N} for a moderate value

of N. In the latter case, the potential function and associated weights can be computed numerically. For instance, linear programming can be used as discussed in Section 6.2. Alternatively, it can be shown that Lemma 5 combined with Theorem 7 implies that

φ_{t−1}(s) = max{p φ_t(s + z_1) + (1 − p) φ_t(s + z_2) : z_1, z_2 ∈ B, p ∈ [0, 1], p z_1 + (1 − p) z_2 = δ}

which can be evaluated using a simple search over all pairs z_1, z_2 (since B is finite).

Figure 4 compares the bound φ_0(0) for the drifting games associated with boost-by-majority and variants in which B is {−1, +1}, {−1, 0, +1} and [−1, +1] (using the approximation that was just mentioned), as well as AdaBoost (discussed in the next section). These bounds are plotted as a function of the number of rounds T.

Figure 4. A comparison of the bound φ_0(0) for the drifting games associated with AdaBoost (Section 7.2) and boost-by-majority (Sections 3 and 7.1). For AdaBoost, η is set as in Eq. (28). For boost-by-majority, the bound is plotted when B is {−1, +1}, {−1, 0, +1} and [−1, +1]. (The latter case is approximated by B = {i/100 : i = −100, ..., 100}.) The bound is plotted as a function of the number of rounds T. The drift parameter is fixed to δ = 0.2. (The jagged nature of the B = {−1, +1} curve is due to the fact that games with an even number of rounds, in which ties count as a loss for the shepherd so that L(0) = 1, are harder than games with an odd number of rounds.)

7.2. AdaBoost and variants

As mentioned in Section 3, a simplified, non-adaptive version of AdaBoost can be derived as an instance of OS. To do this, we simply replace the loss function (Eq. (7)) in the binary boosting game of Section 3 with an exponential loss function L(s) = e^{−ηs}, where η > 0 is a parameter of the game. As a special case of the discussion below, it will follow that

φ_t(s) = κ^{T−t} e^{−ηs}

where κ is the constant

κ = ((1 − δ)/2) e^η + ((1 + δ)/2) e^{−η}.

Also, the weight given to a chip at position s on round t is

κ^{T−t} ((e^η − e^{−η})/2) e^{−ηs}

which is proportional to e^{−ηs} (in other words, the weighting function is effectively unchanged from round to round). This weighting is the same as the one used by a non-adaptive version of AdaBoost in which all weak hypotheses are given equal weight. Since e^{−ηs} is an upper bound on the loss function of Eq. (7), Theorem 2 implies an upper bound on the fraction of mistakes of the final hypothesis of φ_0(0) = κ^T. When

η = (1/2) ln((1 + δ)/(1 − δ))    (28)

so that κ is minimized, this gives an upper bound of (1 − δ²)^{T/2} = (1 − 4γ²)^{T/2}, which is equivalent to a non-adaptive version of Freund and Schapire's (1997) analysis.

We next consider a more general drifting game in n dimensions whose loss function is a sum of exponentials

L(s) = Σ_{j=1}^k b_j exp(−η_j u_j · s)    (29)

where the b_j's, η_j's and u_j's are parameters with b_j > 0, η_j > 0, ‖u_j‖_1 = 1 and u_j ≤_σ 0 for some sign vector σ. For this game, B = [−1, +1]^n and p = 1. Many (non-adaptive) variants of AdaBoost correspond to special cases of this game. For instance, AdaBoost.M2 (Freund & Schapire, 1997), a multiclass version of AdaBoost, essentially uses the loss function

L(s) = Σ_{l=2}^n e^{−(η/2)(s_1 − s_l)}

where we follow the multiclass setup of Section 3, so that n is the number of classes and the first component in the drifting game is identified with the correct class. (As before, we

only consider a non-adaptive game in which η > 0 is a fixed, tunable parameter.) Likewise, AdaBoost.MH (Schapire & Singer, 1999), another multiclass version of AdaBoost, uses the loss function

L(s) = e^{−ηs_1} + Σ_{l=2}^n e^{ηs_l}.

Note that both loss functions upper bound the true loss for multiclass boosting given in Eq. (11). Moreover, both functions clearly have the form given in Eq. (29).

We claim that, for the general game with loss function as in Eq. (29),

φ_t(s) = Σ_j b_j κ_j^{T−t} exp(−η_j u_j · s)    (30)

where

κ_j = ((1 − δ)/2) e^{η_j} + ((1 + δ)/2) e^{−η_j}.

Proof of Eq. (30) is by backwards induction on t. For fixed t and s, let

w = Σ_j b_j κ_j^{T−t} ((e^{η_j} − e^{−η_j})/2) u_j exp(−η_j u_j · s).

We will show that this is the minimizing weight vector that gets used by OS for a chip at position s at time t. Let

b̄_j = b_j κ_j^{T−t} exp(−η_j u_j · s).

Note that

φ_t(s + z) + w · z = Σ_j b̄_j [exp(−η_j u_j · z) + ((e^{η_j} − e^{−η_j})/2) u_j · z] ≤ Σ_j b̄_j ((e^{η_j} + e^{−η_j})/2)    (31)

since

e^{−ηx} ≤ (e^η + e^{−η})/2 − ((e^η − e^{−η})/2) x

for all η ∈ R and x ∈ [−1, +1], by convexity of e^{−ηx}. Also, by our assumptions on the b_j, u_j and η_j, we can compute

‖w‖_1 = Σ_j b̄_j ((e^{η_j} − e^{−η_j})/2).    (32)

Thus, combining Eqs. (31) and (32) gives

φ_{t−1}(s) ≤ sup_{z∈B} (φ_t(s + z) + w · z − δ‖w‖_1) ≤ Σ_j b̄_j κ_j = Σ_j b_j κ_j^{T−t+1} exp(−η_j u_j · s).

This gives the needed upper bound on φ_{t−1}(s). For the lower bound, using Theorem 7 (since L is unate with sign vector σ), we have

φ_{t−1}(s) ≥ min_{w ≤_σ 0} max_{z ∈ {−σ, σ}} (φ_t(s + z) + w · z − δ‖w‖_1)
= min_{c ≥ 0} max{ Σ_j b̄_j e^{−η_j} + c − δc,  Σ_j b̄_j e^{η_j} − c − δc }

where we have used u_j · σ = −1 and w · σ = −‖w‖_1 (since u_j ≤_σ 0 and w ≤_σ 0). We also have identified c with ‖w‖_1. Solving the min max expression gives the desired lower bound. This completes the proof of Eq. (30).

7.3. On-line learning algorithms

In this section, we show how Cesa-Bianchi et al.'s (1996) BW algorithm for combining expert advice can be derived as an instance of OS. We will also see how their algorithm can be generalized, and how Littlestone and Warmuth's (1994) weighted majority algorithm can also be derived and analyzed.

Suppose that we have access to m experts. On each round t, each expert i provides a prediction ξ_t^i ∈ {−1, +1}. A master algorithm combines their predictions into its own prediction ψ_t ∈ {−1, +1}. An outcome y_t ∈ {−1, +1} is then observed. The master makes a mistake if ψ_t ≠ y_t, and similarly for expert i if ξ_t^i ≠ y_t. The goal of the master is to minimize how many mistakes it makes relative to the best expert.

We will consider master algorithms which use a weighted majority vote to form their predictions; that is,

ψ_t = sgn(Σ_{i=1}^m w_t^i ξ_t^i).

The problem is to derive a good choice of weights w_t^i. We also assume that the master algorithm is conservative in the sense that rounds on which the master's predictions are correct are effectively ignored (so that the weights w_t^i only depend upon previous rounds on which mistakes were made).
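The conservative master just described can be sketched in a few lines. The multiplicative update below is the weighted majority scheme of Littlestone and Warmuth (1994), used here only as a placeholder weighting; the BW weighting derived in this section instead sets w_t^i from each expert's mistake count. The data (m = 5 experts with hypothetical 80% accuracy, β = 0.5, T = 30) are invented for illustration.

```python
import random

m, T, beta = 5, 30, 0.5
rng = random.Random(0)
truth = [rng.choice([-1, +1]) for _ in range(T)]
experts = [[y if rng.random() < 0.8 else -y for y in truth] for _ in range(m)]

w = [1.0] * m
mistakes = 0
for t in range(T):
    xi = [experts[i][t] for i in range(m)]
    psi = 1 if sum(wi * x for wi, x in zip(w, xi)) >= 0 else -1
    if psi != truth[t]:                        # conservative: ignore correct rounds
        mistakes += 1
        # penalize the experts that erred on this mistake round
        w = [wi * (beta if x != truth[t] else 1.0) for wi, x in zip(w, xi)]

assert 0 <= mistakes <= T
```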

Let us suppose that there is one expert that makes at most k mistakes. We will (re)derive an algorithm (namely, BW) and a bound on the number of mistakes made by the master, given this assumption. Since we restrict our attention to conservative algorithms, we can assume without loss of generality that a mistake occurs on every round and simply proceed to bound the total number of rounds.

To set up the problem as a drifting game, we identify one chip with each of the m experts. The problem is one-dimensional so n = 1. The weights w_t^i selected by the master are the same as those chosen by the shepherd. Since we assume that the master makes a mistake on each round, we have for all t that

    y_t Σ_i w_t^i ξ_t^i ≤ 0.        (33)

Thus, if we define the drift z_t^i to be -y_t ξ_t^i, then Σ_i w_t^i z_t^i ≥ 0. Setting δ = 0, we see that Eq. (33) is equivalent to Eq. (1). Also, B = {-1,+1}.

Let M_t^i be the number of mistakes made by expert i on rounds 1,...,t-1. Then by definition of z_t^i, s_t^i = 2M_t^i - t + 1. Let the loss function L be

    L(s) = { 1   if s ≤ 2k - T
           { 0   otherwise.        (34)

Then L(s_{T+1}^i) = 1 if and only if expert i makes a total of k or fewer mistakes in T rounds. Thus, our assumption that the best expert makes at most k mistakes implies that

    1/m ≤ (1/m) Σ_i L(s_{T+1}^i).        (35)

On the other hand, our upper bound on the performance of OS implies that

    (1/m) Σ_i L(s_{T+1}^i) ≤ φ_0(0).        (36)

By an analysis similar to the one given in Section 7.1, it can be seen that

    φ_{t-1}(s) = (1/2)( φ_t(s+1) + φ_t(s-1) ).
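As a numerical sanity check (ours, not the paper's), this recurrence, together with the boundary condition φ_T = L from Eq. (34), can be evaluated directly by dynamic programming, and the value φ_0(0) compared against the cumulative-binomial closed form 2^{-T} Σ_{j≤k} C(T, j). The helper names below are hypothetical.

```python
from functools import lru_cache
from math import comb

# Dynamic program (our sketch, not the paper's code) for the BW
# drifting-game potential: boundary condition phi_T(s) = 1 if s <= 2k - T
# else 0, and recurrence phi_{t-1}(s) = (phi_t(s+1) + phi_t(s-1)) / 2.
def make_potential(T, k):
    @lru_cache(maxsize=None)
    def phi(t, s):
        if t == T:
            return 1.0 if s <= 2 * k - T else 0.0
        return 0.5 * (phi(t + 1, s + 1) + phi(t + 1, s - 1))
    return phi

T, k = 10, 3
phi = make_potential(T, k)

# phi_0(0) agrees with the closed form 2^{-T} * sum_{j <= k} C(T, j):
closed_form = sum(comb(T, j) for j in range(k + 1)) / 2 ** T
assert abs(phi(0, 0) - closed_form) < 1e-12

# The OS weight of a chip at position s at time t,
#     w = (phi(t, s - 1) - phi(t, s + 1)) / 2,
# comes out as a scaled binomial coefficient 2^{-(T-t+1)} C(T-t, k - M),
# where M = (t + s - 1) / 2 is the expert's mistake count so far.
t, s = 5, 0                       # here M = (5 + 0 - 1) / 2 = 2
w = 0.5 * (phi(t, s - 1) - phi(t, s + 1))
assert abs(w - comb(T - t, k - 2) / 2 ** (T - t + 1)) < 1e-12
```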

Solving this recurrence gives

    φ_t(s) = 2^{-(T-t)} ( T-t choose ≤ k - (t+s)/2 )

where

    ( n choose ≤ k ) = Σ_{j=0}^{k} ( n choose j ).

In particular,

    φ_0(0) = 2^{-T} ( T choose ≤ k ).        (37)

Combining Eqs. (35)–(37) gives

    1/m ≤ 2^{-T} ( T choose ≤ k ).        (38)

In other words, the number of mistakes T of the master algorithm must satisfy Eq. (38) and so must be at most

    max{ q ∈ N : q ≤ lg m + lg ( q choose ≤ k ) },

the same bound given by Cesa-Bianchi et al. (1996). The weighting function obtained is also equivalent to theirs since, by a similar argument to that used in Section 7.1, OS gives

    w_t^i = (1/2)( φ_t(s_t^i - 1) - φ_t(s_t^i + 1) )
          = 2^{-(T-t+1)} ( T-t choose k - (t + s_t^i - 1)/2 )
          = 2^{-(T-t+1)} ( T-t choose k - M_t^i ).

Note that this argument can be generalized to the case in which the experts' predictions are not restricted to {-1,+1} but instead may be all of [-1,+1], or a subset of this interval, such as {-1,0,+1}. The performance of each expert then is measured on each round using absolute loss (1/2)|ξ_t^i - y_t| rather than whether or not it made a mistake. In this case, as in the analogous extension of boost-by-majority given in Section 3, we only need to replace B by [-1,+1] or {-1,0,+1}. The resulting bound on the number of mistakes of the master is

then the largest T for which 1/m ≤ φ_0(0) (note that φ_0(0) depends implicitly on T). The resulting master algorithm simply uses the weights computed by OS for the appropriate drifting game. It is an open problem to determine if this generalized algorithm enjoys strong optimality properties similar to those of BW (Cesa-Bianchi et al., 1996).

Littlestone and Warmuth's (1994) weighted majority algorithm can also be derived as an instance of OS. To do this, we simply replace the loss function L in the game above with

    L(s) = exp( -η(s - 2k + T) )

for some parameter η > 0. This loss function upper bounds the one in Eq. (34). We assume that experts are permitted to output predictions in [-1,+1] so that B = [-1,+1]. From the results of Section 7.2 applied to this drifting game,

    φ_t(s) = κ^{T-t} exp( -η(s - 2k + T) )

where

    κ = (e^{η} + e^{-η})/2.

Therefore, because one expert suffers loss at most k,

    1/m ≤ φ_0(0) = κ^T e^{η(2k - T)}.

Equivalently, the number of mistakes T is at most

    ( 2ηk + ln m ) / ln( 2 / (1 + e^{-2η}) ),

exactly the bound given by Littlestone and Warmuth (1994). The algorithm is also the same as theirs since the weight given to an expert (chip) at position s at time t is

    w = ((e^{η} - e^{-η})/2) κ^{T-t-1} exp( -η(s - 2k + T) ) ∝ exp( -2η M_t^i ).

8. Open problems

This paper represents the first work on general drifting games. As such, there are many open problems. We have presented closed-form solutions of the potential function for just a few special cases. Are there other cases in which such closed-form solutions are possible? In particular, can the boosting games of Section 3 corresponding to B = {-1,0,+1} and B = [-1,+1] be put into closed form?

For games in which a closed form is not possible, is there nevertheless a general method of characterizing the loss bound φ_0(0), say, as the number of rounds T gets large?

Side products of our work include new versions of boost-by-majority for the multiclass case, as well as binary cases in which the weak hypotheses have range {-1,0,+1} or [-1,+1]. However, the optimality proof for the drifting game only carries over to the boosting setting if the final hypothesis has the restricted forms given in Eqs. (4) and (10). Are the resulting boosting algorithms also optimal (for instance, in the sense proved by Freund (1995) for boost-by-majority) without these restrictions? Likewise, can the extensions of the BW algorithm in Section 7.3 be shown to be optimal? Can this algorithm be extended using drifting games to the multiclass case, or to the case in which the master is allowed to output predictions in [-1,+1] (suffering absolute loss)?

The OS algorithm is non-adaptive in the sense that δ must be known ahead of time. To what extent can OS be made adaptive? For instance, can Freund's (2001) recent technique for making boost-by-majority adaptive be carried over to the general drifting-game setting? Similarly, what happens if the number of rounds T is not known in advance?

Finally, are there other interesting drifting games for entirely different learning problems such as regression or density estimation?

Acknowledgments

Many thanks to Yoav Freund for very helpful discussions which led to this research.

Notes

1. In an earlier version of this paper, the shepherd was called the drifter, a term that was found by some readers to be confusing. The name of the main algorithm has also been changed from Shepherd to OS.
2. Of course, the real goal of a boosting algorithm is to find a hypothesis with low generalization error. In this paper, we only focus on the simplified problem of minimizing error on the given training examples.

References

Blackwell, D. (1956). An analog of the minimax theorem for vector payoffs. Pacific Journal of Mathematics, 6:1, 1–8.
Cesa-Bianchi, N., Freund, Y., Haussler, D., Helmbold, D. P., Schapire, R. E., & Warmuth, M. K. (1997). How to use expert advice. Journal of the Association for Computing Machinery, 44:3, 427–485.
Cesa-Bianchi, N., Freund, Y., Helmbold, D. P., & Warmuth, M. K. (1996). On-line prediction and conversion strategies. Machine Learning, 25, 71–110.
Freund, Y. (1995). Boosting a weak learning algorithm by majority. Information and Computation, 121:2, 256–285.
Freund, Y. (2001). An adaptive version of the boost by majority algorithm. Machine Learning, 43:3, 293–318.
Freund, Y. & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:1, 119–139.
Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:301, 13–30.
Littlestone, N. & Warmuth, M. K. (1994). The weighted majority algorithm. Information and Computation, 108, 212–261.

Rockafellar, R. T. (1970). Convex Analysis. Princeton, NJ: Princeton University Press.
Schapire, R. E. & Singer, Y. (1999). Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:3, 297–336.

Received October 8, 1999
Revised October 8, 1999
Accepted June 1, 2000
Final manuscript July 31, 2000


More information

Supplementary Online Material

Supplementary Online Material Suppleenary Onlne Maeral In he followng secons, we presen our approach o calculang yapunov exponens. We derve our cenral resul Λ= τ n n pτλ ( A pbt λ( = τ, = A ( drecly fro he growh equaon x ( = AE x (

More information

CH.3. COMPATIBILITY EQUATIONS. Continuum Mechanics Course (MMC) - ETSECCPB - UPC

CH.3. COMPATIBILITY EQUATIONS. Continuum Mechanics Course (MMC) - ETSECCPB - UPC CH.3. COMPATIBILITY EQUATIONS Connuum Mechancs Course (MMC) - ETSECCPB - UPC Overvew Compably Condons Compably Equaons of a Poenal Vecor Feld Compably Condons for Infnesmal Srans Inegraon of he Infnesmal

More information

Appendix H: Rarefaction and extrapolation of Hill numbers for incidence data

Appendix H: Rarefaction and extrapolation of Hill numbers for incidence data Anne Chao Ncholas J Goell C seh lzabeh L ander K Ma Rober K Colwell and Aaron M llson 03 Rarefacon and erapolaon wh ll numbers: a framewor for samplng and esmaon n speces dversy sudes cology Monographs

More information

e-journal Reliability: Theory& Applications No 2 (Vol.2) Vyacheslav Abramov

e-journal Reliability: Theory& Applications No 2 (Vol.2) Vyacheslav Abramov June 7 e-ournal Relably: Theory& Applcaons No (Vol. CONFIDENCE INTERVALS ASSOCIATED WITH PERFORMANCE ANALYSIS OF SYMMETRIC LARGE CLOSED CLIENT/SERVER COMPUTER NETWORKS Absrac Vyacheslav Abramov School

More information

Advanced Macroeconomics II: Exchange economy

Advanced Macroeconomics II: Exchange economy Advanced Macroeconomcs II: Exchange economy Krzyszof Makarsk 1 Smple deermnsc dynamc model. 1.1 Inroducon Inroducon Smple deermnsc dynamc model. Defnons of equlbrum: Arrow-Debreu Sequenal Recursve Equvalence

More information

Part II CONTINUOUS TIME STOCHASTIC PROCESSES

Part II CONTINUOUS TIME STOCHASTIC PROCESSES Par II CONTINUOUS TIME STOCHASTIC PROCESSES 4 Chaper 4 For an advanced analyss of he properes of he Wener process, see: Revus D and Yor M: Connuous marngales and Brownan Moon Karazas I and Shreve S E:

More information

NPTEL Project. Econometric Modelling. Module23: Granger Causality Test. Lecture35: Granger Causality Test. Vinod Gupta School of Management

NPTEL Project. Econometric Modelling. Module23: Granger Causality Test. Lecture35: Granger Causality Test. Vinod Gupta School of Management P age NPTEL Proec Economerc Modellng Vnod Gua School of Managemen Module23: Granger Causaly Tes Lecure35: Granger Causaly Tes Rudra P. Pradhan Vnod Gua School of Managemen Indan Insue of Technology Kharagur,

More information

Lecture 6: Learning for Control (Generalised Linear Regression)

Lecture 6: Learning for Control (Generalised Linear Regression) Lecure 6: Learnng for Conrol (Generalsed Lnear Regresson) Conens: Lnear Mehods for Regresson Leas Squares, Gauss Markov heorem Recursve Leas Squares Lecure 6: RLSC - Prof. Sehu Vjayakumar Lnear Regresson

More information

Density Matrix Description of NMR BCMB/CHEM 8190

Density Matrix Description of NMR BCMB/CHEM 8190 Densy Marx Descrpon of NMR BCMBCHEM 89 Operaors n Marx Noaon Alernae approach o second order specra: ask abou x magnezaon nsead of energes and ranson probables. If we say wh one bass se, properes vary

More information

Approximate Analytic Solution of (2+1) - Dimensional Zakharov-Kuznetsov(Zk) Equations Using Homotopy

Approximate Analytic Solution of (2+1) - Dimensional Zakharov-Kuznetsov(Zk) Equations Using Homotopy Arcle Inernaonal Journal of Modern Mahemacal Scences, 4, (): - Inernaonal Journal of Modern Mahemacal Scences Journal homepage: www.modernscenfcpress.com/journals/jmms.aspx ISSN: 66-86X Florda, USA Approxmae

More information

5th International Conference on Advanced Design and Manufacturing Engineering (ICADME 2015)

5th International Conference on Advanced Design and Manufacturing Engineering (ICADME 2015) 5h Inernaonal onference on Advanced Desgn and Manufacurng Engneerng (IADME 5 The Falure Rae Expermenal Sudy of Specal N Machne Tool hunshan He, a, *, La Pan,b and Bng Hu 3,c,,3 ollege of Mechancal and

More information

General Weighted Majority, Online Learning as Online Optimization

General Weighted Majority, Online Learning as Online Optimization Sascal Technques n Robocs (16-831, F10) Lecure#10 (Thursday Sepember 23) General Weghed Majory, Onlne Learnng as Onlne Opmzaon Lecurer: Drew Bagnell Scrbe: Nahanel Barshay 1 1 Generalzed Weghed majory

More information

J i-1 i. J i i+1. Numerical integration of the diffusion equation (I) Finite difference method. Spatial Discretization. Internal nodes.

J i-1 i. J i i+1. Numerical integration of the diffusion equation (I) Finite difference method. Spatial Discretization. Internal nodes. umercal negraon of he dffuson equaon (I) Fne dfference mehod. Spaal screaon. Inernal nodes. R L V For hermal conducon le s dscree he spaal doman no small fne spans, =,,: Balance of parcles for an nernal

More information

Computational and Statistical Learning theory Assignment 4

Computational and Statistical Learning theory Assignment 4 Coputatonal and Statstcal Learnng theory Assgnent 4 Due: March 2nd Eal solutons to : karthk at ttc dot edu Notatons/Defntons Recall the defnton of saple based Radeacher coplexty : [ ] R S F) := E ɛ {±}

More information

Online Appendix for. Strategic safety stocks in supply chains with evolving forecasts

Online Appendix for. Strategic safety stocks in supply chains with evolving forecasts Onlne Appendx for Sraegc safey socs n supply chans wh evolvng forecass Tor Schoenmeyr Sephen C. Graves Opsolar, Inc. 332 Hunwood Avenue Hayward, CA 94544 A. P. Sloan School of Managemen Massachuses Insue

More information

A TWO-LEVEL LOAN PORTFOLIO OPTIMIZATION PROBLEM

A TWO-LEVEL LOAN PORTFOLIO OPTIMIZATION PROBLEM Proceedngs of he 2010 Wner Sulaon Conference B. Johansson, S. Jan, J. Monoya-Torres, J. Hugan, and E. Yücesan, eds. A TWO-LEVEL LOAN PORTFOLIO OPTIMIZATION PROBLEM JanQang Hu Jun Tong School of Manageen

More information

Econ107 Applied Econometrics Topic 5: Specification: Choosing Independent Variables (Studenmund, Chapter 6)

Econ107 Applied Econometrics Topic 5: Specification: Choosing Independent Variables (Studenmund, Chapter 6) Econ7 Appled Economercs Topc 5: Specfcaon: Choosng Independen Varables (Sudenmund, Chaper 6 Specfcaon errors ha we wll deal wh: wrong ndependen varable; wrong funconal form. Ths lecure deals wh wrong ndependen

More information

An introduction to Support Vector Machine

An introduction to Support Vector Machine An nroducon o Suppor Vecor Machne 報告者 : 黃立德 References: Smon Haykn, "Neural Neworks: a comprehensve foundaon, second edon, 999, Chaper 2,6 Nello Chrsann, John Shawe-Tayer, An Inroducon o Suppor Vecor Machnes,

More information

Density Matrix Description of NMR BCMB/CHEM 8190

Density Matrix Description of NMR BCMB/CHEM 8190 Densy Marx Descrpon of NMR BCMBCHEM 89 Operaors n Marx Noaon If we say wh one bass se, properes vary only because of changes n he coeffcens weghng each bass se funcon x = h< Ix > - hs s how we calculae

More information

Lecture VI Regression

Lecture VI Regression Lecure VI Regresson (Lnear Mehods for Regresson) Conens: Lnear Mehods for Regresson Leas Squares, Gauss Markov heorem Recursve Leas Squares Lecure VI: MLSC - Dr. Sehu Vjayakumar Lnear Regresson Model M

More information

CHAPTER 10: LINEAR DISCRIMINATION

CHAPTER 10: LINEAR DISCRIMINATION CHAPER : LINEAR DISCRIMINAION Dscrmnan-based Classfcaon 3 In classfcaon h K classes (C,C,, C k ) We defned dscrmnan funcon g j (), j=,,,k hen gven an es eample, e chose (predced) s class label as C f g

More information

Homework 8: Rigid Body Dynamics Due Friday April 21, 2017

Homework 8: Rigid Body Dynamics Due Friday April 21, 2017 EN40: Dynacs and Vbraons Hoework 8: gd Body Dynacs Due Frday Aprl 1, 017 School of Engneerng Brown Unversy 1. The earh s roaon rae has been esaed o decrease so as o ncrease he lengh of a day a a rae of

More information

Graduate Macroeconomics 2 Problem set 5. - Solutions

Graduate Macroeconomics 2 Problem set 5. - Solutions Graduae Macroeconomcs 2 Problem se. - Soluons Queson 1 To answer hs queson we need he frms frs order condons and he equaon ha deermnes he number of frms n equlbrum. The frms frs order condons are: F K

More information

System in Weibull Distribution

System in Weibull Distribution Internatonal Matheatcal Foru 4 9 no. 9 94-95 Relablty Equvalence Factors of a Seres-Parallel Syste n Webull Dstrbuton M. A. El-Dacese Matheatcs Departent Faculty of Scence Tanta Unversty Tanta Egypt eldacese@yahoo.co

More information