Approximation Lasso Methods for Language Modeling


Jianfeng Gao
Microsoft Research
One Microsoft Way
Redmond, WA, USA

Hisami Suzuki
Microsoft Research
One Microsoft Way
Redmond, WA, USA

Bin Yu
Department of Statistics
University of California, Berkeley
Berkeley, CA, U.S.A.

Abstract

Lasso is a regularization method for parameter estimation in linear models. It optimizes the model parameters with respect to a loss function subject to model complexities. This paper explores the use of lasso for statistical language modeling for text input. Owing to the very large number of parameters, directly optimizing the penalized lasso loss function is impossible. Therefore, we investigate two approximation methods, the boosted lasso (BLasso) and the forward stagewise linear regression (FSLR). Both methods, when used with the exponential loss function, bear a strong resemblance to the boosting algorithm, which has been used as a discriminative training method for language modeling. Evaluations on the task of Japanese text input show that BLasso is able to produce the best approximation to the lasso solution, and leads to a significant improvement, in terms of character error rate, over boosting and the traditional maximum likelihood estimation.

1 Introduction

Language modeling (LM) is fundamental to a wide range of applications. Recently, it has been shown that a linear model estimated using discriminative training methods, such as the boosting and perceptron algorithms, significantly outperforms a traditional word trigram model trained using maximum likelihood estimation (MLE) on several tasks such as speech recognition and Asian language text input (Bacchiani et al. 2004; Roark et al. 2004; Gao et al. 2005; Suzuki and Gao 2005).

The success of discriminative training methods is largely due to the fact that, unlike the traditional approach (e.g., MLE), which maximizes a function (e.g., the likelihood of training data) that is only loosely associated with error rate, discriminative training methods aim to directly minimize the error rate on training data, even if doing so reduces the likelihood. However, given a finite set of training samples, discriminative training methods could lead to an arbitrarily complex model for the purpose of achieving zero training error. It is well known that complex models exhibit high variance and perform poorly on unseen data. Therefore some regularization method has to be used to control the complexity of the model.

Lasso is a regularization method for parameter estimation in linear models. It optimizes the model parameters with respect to a loss function subject to model complexities. The basic idea of lasso was originally proposed by Tibshirani (1996). Recently, there have been several implementations of and experiments with lasso on multi-class classification tasks where only a small number of features need to be handled and the lasso solution can be directly computed via numerical methods. To our knowledge, this paper presents the first empirical study of lasso for a realistic, large scale task: LM for Asian language text input. Because the task utilizes millions of features and training samples, directly optimizing the penalized lasso loss function is impossible. Therefore, two approximation methods, the boosted lasso (BLasso, Zhao and Yu 2004) and the forward stagewise linear regression (FSLR, Hastie et al. 2001), are investigated. Both methods, when used with the exponential loss function, bear a strong resemblance to the boosting algorithm, which has been used as a discriminative training method for LM. Evaluations on the task of Japanese text input show that BLasso is able to produce the best approximation to the lasso solution, and leads to a significant improvement, in terms of character error rate, over the boosting algorithm and the traditional MLE.

2 LM Task and Problem Definition

This paper studies LM as applied to Asian language (e.g. Chinese or Japanese) text input, a standard method of inputting Chinese or Japanese text by converting the input phonetic symbols into the appropriate word string. In this paper we call the task IME, which stands for input method editor, based on the name of the commonly used Windows-based application.

Performance on IME is measured in terms of the character error rate (CER), which is the number of characters wrongly converted from the phonetic string divided by the number of characters in the correct transcript.

Similar to speech recognition, IME is viewed as a Bayes decision problem. Let A be the input phonetic string. An IME system's task is to choose the most likely word string W* among those candidates that could be converted from A:

    W* = arg max_{W in GEN(A)} P(W|A) = arg max_{W in GEN(A)} P(W) P(A|W)   (1)

where GEN(A) denotes the candidate set given A. Unlike speech recognition, however, there is no acoustic ambiguity, as the phonetic string is inputted by users. Moreover, we can assume a unique mapping from W to A in IME, as words have unique readings, i.e. P(A|W) = 1. So the decision of Equation (1) depends solely upon P(W), making IME an ideal evaluation test bed for LM.

In this study, the LM task for IME is formulated under the framework of linear models (e.g., Duda et al. 2001). We use the following notation, adapted from Collins and Koo (2005):

Training data is a set of example input/output pairs. In LM for IME, training samples are represented as {A_i, W_i^R}, for i = 1...M, where each A_i is an input phonetic string and W_i^R is the reference transcript of A_i.

We assume some way of generating a set of candidate word strings given A, denoted by GEN(A). In our experiments, GEN(A) consists of the top n word strings converted from A using a baseline IME system that uses only a word trigram model.

We assume a set of D+1 features f_d(W), for d = 0...D. The features could be arbitrary functions that map W to real values. Using vector notation, we have f(W) in R^{D+1}, where f(W) = [f_0(W), f_1(W), ..., f_D(W)]^T. f_0(W) is called the base feature, and is defined in our case as the log probability that the word trigram model assigns to W. The other features (f_d(W), for d = 1...D) are defined as the counts of word n-grams (n = 1 and 2 in our experiments) in W.

Finally, the parameters of the model form a vector of D+1 dimensions, one for each feature function: lambda = [lambda_0, lambda_1, ..., lambda_D]. The score of a word string W can be written as

    Score(W, lambda) = lambda . f(W) = Sum_{d=0..D} lambda_d f_d(W).   (2)

The decision rule of Equation (1) is rewritten as

    W*(A, lambda) = arg max_{W in GEN(A)} Score(W, lambda).   (3)

Equation (3) views IME as a ranking problem, where the model gives the ranking score, not probabilities. We therefore do not evaluate the model via perplexity.

Now, assume that we can measure the number of conversion errors in W by comparing it with a reference transcript W^R using an error function Er(W^R, W), which is the string edit distance function in our case. We call the sum of error counts over the training samples the sample risk. Our goal then is to search for the best parameter set lambda which minimizes the sample risk, as in Equation (4):

    lambda_MSR = arg min_lambda Sum_{i=1..M} Er(W_i^R, W*(A_i, lambda)).   (4)

However, (4) cannot be optimized easily, since Er(.) is a piecewise constant (or step) function of lambda and its gradient is undefined. Therefore, discriminative methods apply different approaches that optimize it approximately. The boosting algorithm described below is one such approach.

3 Boosting

This section gives a brief review of the boosting algorithm, following the description in some recent work (e.g., Schapire and Singer 1999; Collins and Koo 2005). The boosting algorithm uses an exponential loss function (ExpLoss) to approximate the sample risk in Equation (4). We define the margin of the pair (W^R, W) with respect to the model lambda as

    M(W^R, W) = Score(W^R, lambda) - Score(W, lambda)   (5)

Then, ExpLoss is defined as

    ExpLoss(lambda) = Sum_{i=1..M} Sum_{W in GEN(A_i)} exp(-M(W_i^R, W))   (6)
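Equations (2), (5) and (6) translate directly into code. The following Python sketch is ours, not the paper's implementation; it assumes each word string is represented by a sparse feature vector stored as a dict from feature index to value:

    import math

    def score(f_w, lam):
        # Equation (2): Score(W, lambda) = sum_d lambda_d * f_d(W); f_w maps
        # a feature index d to f_d(W) for the features active in W.
        return sum(lam.get(d, 0.0) * v for d, v in f_w.items())

    def exp_loss(samples, lam):
        # Equations (5)-(6): sum over samples i and candidates W in GEN(A_i)
        # of exp(-(Score(W_i^R, lambda) - Score(W, lambda))).
        total = 0.0
        for f_ref, gen in samples:  # (reference features, candidate feature vectors)
            s_ref = score(f_ref, lam)
            for f_w in gen:
                total += math.exp(-(s_ref - score(f_w, lam)))
        return total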
Notice that ExpLoss is convex, so there is no problem with local minima when optimizing it. It is shown in Freund et al. (1998) and Collins and Koo (2005) that there exist gradient search procedures that converge to the right solution. Figure 1 summarizes the boosting algorithm we used.

1 Set lambda_0 = arg min_{lambda_0} ExpLoss(lambda); and lambda_d = 0 for d = 1...D
2 Select a feature f_k* which has the largest estimated impact on reducing the ExpLoss of Eq. (6)
3 Update lambda_k* <- lambda_k* + delta*, and return to Step 2

Figure 1: The boosting algorithm
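A minimal sketch of the loop in Figure 1, reusing exp_loss from the sketch above. Because ExpLoss is convex in each coordinate, the one-dimensional minimization over delta in Step 2 is approximated here by a ternary search over an arbitrary illustrative interval; this stands in for the closed-form update used in practice (cf. Equation (16) in Section 5.2):

    def exploss_after_update(samples, lam, k, delta):
        # ExpLoss(Upd(lambda, k, delta)): coordinate k incremented by delta.
        lam2 = dict(lam)
        lam2[k] = lam2.get(k, 0.0) + delta
        return exp_loss(samples, lam2)

    def best_delta(samples, lam, k, lo=-5.0, hi=5.0, iters=40):
        # Ternary search over delta; valid because ExpLoss is convex in lambda_k.
        for _ in range(iters):
            m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
            if (exploss_after_update(samples, lam, k, m1)
                    < exploss_after_update(samples, lam, k, m2)):
                hi = m2
            else:
                lo = m1
        return 0.5 * (lo + hi)

    def boost(samples, features, n_iters):
        lam = {}  # Step 1 (simplified): weights start at 0; base feature omitted.
        for _ in range(n_iters):
            # Step 2: pick (k*, delta*) minimizing ExpLoss(Upd(lambda, k, delta)).
            k_star, d_star, best = None, 0.0, float("inf")
            for k in features:
                d = best_delta(samples, lam, k)
                loss = exploss_after_update(samples, lam, k, d)
                if loss < best:
                    k_star, d_star, best = k, d, loss
            # Step 3: apply the update and repeat.
            lam[k_star] = lam.get(k_star, 0.0) + d_star
        return lam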

After initialization, Steps 2 and 3 are repeated N times; at each iteration, a feature is chosen and its weight is updated as follows.

First, we define Upd(lambda, k, delta) as an updated model, with the same parameter values as lambda with the exception of lambda_k, which is incremented by delta:

    Upd(lambda, k, delta) = {lambda_0, lambda_1, ..., lambda_k + delta, ..., lambda_D}

Then, Steps 2 and 3 in Figure 1 can be rewritten as Equations (7) and (8), respectively:

    (k*, delta*) = arg min_{k, delta} ExpLoss(Upd(lambda, k, delta))   (7)

    lambda^t = Upd(lambda^{t-1}, k*, delta*)   (8)

The boosting algorithm can be too greedy: each iteration usually reduces the ExpLoss(.) on training data, so for a large enough number of iterations this loss can be made arbitrarily small. However, fitting training data too well eventually leads to overfitting, which degrades the performance on unseen test data (even though in boosting overfitting can happen very slowly).

Shrinkage is a simple approach to dealing with the overfitting problem. It scales the incremental step delta by a small constant nu, with nu in (0, 1). Thus, the update of Equation (8) with shrinkage is

    lambda^t = Upd(lambda^{t-1}, k*, nu delta*)   (9)

Empirically, it has been found that smaller values of nu lead to smaller numbers of test errors.

4 Lasso

Lasso is a regularization method for estimation in linear models (Tibshirani 1996). It regularizes, or shrinks, a fitted model through an L1 penalty or constraint. Let T(lambda) denote the L1 penalty of the model, i.e., T(lambda) = Sum_{d=0..D} |lambda_d|. We then optimize the model lambda so as to minimize a regularized loss function on training data, called the lasso loss, defined as

    LassoLoss(lambda, alpha) = ExpLoss(lambda) + alpha T(lambda)   (10)

where T(lambda) generally penalizes larger (or more complex) models, and the parameter alpha controls the amount of regularization applied to the estimate. Setting alpha = 0 reverts the LassoLoss to the unregularized ExpLoss; as alpha increases, the model coefficients all shrink, each ultimately becoming zero. In practice, alpha should be adaptively chosen to minimize an estimate of expected loss; e.g., alpha decreases as the number of iterations increases.

Computation of the solution to the lasso problem has been studied for special loss functions. For least squares regression, there is a fast algorithm, LARS, that finds the whole lasso path for different values of alpha (Osborne et al. 2000a; 2000b; Efron et al. 2004); for the 1-norm SVM, the problem can be transformed into a linear program with a fast algorithm similar to LARS (Zhu et al. 2003). However, the solution to the lasso problem for a general convex loss function and an adaptive alpha remains open. More importantly for our purposes, directly minimizing the lasso function of Equation (10) with respect to lambda is not possible when a very large number of model parameters are employed, as in our task of LM for IME. Therefore we investigate below two methods that closely approximate the effect of the lasso, and are very similar to the boosting algorithm.

It is also worth noting the difference between the L1 and L2 penalties. The classical ridge regression setting uses an L2 penalty in Equation (10), i.e., T(lambda) = Sum_{d=0..D} (lambda_d)^2, which is much easier to minimize (for least squares loss, but not for ExpLoss). However, recent research (Donoho et al. 1995) shows that the L1 penalty is better suited to sparse situations, where only a small number of features among all candidate features have nonzero weights. We find that our task is indeed a sparse situation: among 860,000 features, only around 5,000 features have nonzero weights in the resulting linear model. We therefore focus on the L1 penalty, and leave the empirical comparison of the L1 and L2 penalties on the LM task to future work.
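In code, Equation (10) is a one-liner on top of the exp_loss sketch above (again ours, for illustration):

    def lasso_loss(samples, lam, alpha):
        # Equation (10): ExpLoss(lambda) + alpha * T(lambda), T the L1 penalty.
        return exp_loss(samples, lam) + alpha * sum(abs(v) for v in lam.values())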
4.1 Forward Stagewise Linear Regression (FSLR)

The first approximation method we used is FSLR, described in Hastie et al. (2001, Algorithm 10.4), in which Steps 2 and 3 in Figure 1 are performed according to Equations (7) and (11), respectively:

    (k*, delta*) = arg min_{k, delta} ExpLoss(Upd(lambda, k, delta))   (7)

    lambda^t = Upd(lambda^{t-1}, k*, epsilon x sign(delta*))   (11)

Notice that FSLR is very similar to the boosting algorithm with shrinkage in that at each step, the feature f_k* that has the largest estimated impact on reducing ExpLoss is selected. The only difference is that FSLR updates the weight of f_k* by a small fixed step size epsilon. By taking such small steps, FSLR imposes some implicit regularization, and can closely approximate the effect of the lasso in a local sense (Hastie et al. 2001). Empirically, we find that the performance of the boosting algorithm with shrinkage closely resembles that of FSLR, with the learning rate parameter nu corresponding to epsilon.
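The training procedures seen so far differ only in the step actually taken once (k*, delta*) has been found via Equation (7). A schematic side-by-side, in our own notation rather than the paper's:

    import math

    def apply_update(lam, k_star, d_star, method, nu=0.1, eps=0.1):
        if method == "boosting":     # Eq. (8): take the full optimal step.
            step = d_star
        elif method == "shrinkage":  # Eq. (9): scale the optimal step by nu.
            step = nu * d_star
        elif method == "fslr":       # Eq. (11): fixed step eps toward delta*.
            step = eps * math.copysign(1.0, d_star)
        else:
            raise ValueError(method)
        lam[k_star] = lam.get(k_star, 0.0) + step
        return lam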

4.2 Boosted Lasso (BLasso)

The second method we used is a modified version of the BLasso algorithm described in Zhao and Yu (2004). There are two major differences between BLasso and FSLR. At each iteration, BLasso can take either a forward step or a backward step. Similar to the boosting algorithm and FSLR, at each forward step, a feature is selected and its weight is updated according to Equations (12) and (13):

    (k*, delta*) = arg min_{k, delta = +-epsilon} ExpLoss(Upd(lambda, k, delta))   (12)

    lambda^t = Upd(lambda^{t-1}, k*, epsilon x sign(delta*))   (13)

However, there is an important difference between Equations (12) and (7). In the boosting algorithm with shrinkage and in FSLR, as shown in Equation (7), a feature is selected by its impact on reducing the loss with its optimal update delta*. In contrast, in BLasso, as shown in Equation (12), the optimization over delta is removed, and for each feature the loss is calculated with an update of either +epsilon or -epsilon, i.e., a grid search is used for feature selection. We will show later that this seemingly trivial difference brings a significant improvement.

The backward step is unique to BLasso. In each iteration, a feature is selected and its weight is updated backward if and only if this leads to a decrease of the lasso loss, as shown in Equations (14) and (15):

    k* = arg min_{k: lambda_k != 0} ExpLoss(Upd(lambda, k, -sign(lambda_k) epsilon))   (14)

    lambda^t = Upd(lambda^{t-1}, k*, -sign(lambda_k*) epsilon)
               if LassoLoss(lambda^{t-1}, alpha^{t-1}) - LassoLoss(lambda^t, alpha^t) > theta   (15)

where theta is a tolerance parameter.

Figure 2 summarizes the BLasso algorithm we used. After initialization, Steps 4 and 5 are repeated N times; at each iteration, a feature is chosen and its weight is updated either backward or forward by a fixed amount epsilon. Notice that the value of alpha is adaptively chosen according to the reduction of ExpLoss during training. The algorithm starts with a large initial alpha, and then at each forward step the value of alpha decreases until the ExpLoss stops decreasing. This is intuitively desirable: it is expected that the most highly effective features are selected in the early stages of training, so the reduction of ExpLoss at each step in the early stages is more substantial than in later stages. These early steps coincide with the boosting steps most of the time. In other words, the effect of backward steps is more visible at later stages.

Our implementation of BLasso differs slightly from the original algorithm described in Zhao and Yu (2004). Firstly, because the value of the base feature f_0 is the log probability (assigned by a word trigram model) and has a different range from that of the other features in Equation (2), lambda_0 is set to optimize ExpLoss in the initialization step (Step 1 in Figure 2) and remains fixed during training. As suggested by Collins and Koo (2005), this ensures that the contribution of the log-likelihood feature f_0 is well-calibrated with respect to ExpLoss. Secondly, when updating a feature weight, if the size of the optimal update step (computed via Equation (7)) is smaller than epsilon, we use the optimal step to update the feature. Therefore, in our implementation BLasso does not always take a fixed step; it may take steps whose size is smaller than epsilon. In our initial experiments we found that both changes (also used in our implementations of boosting and FSLR) were crucial to the performance of the methods.

1 Initialize lambda^0: set lambda_0 = arg min_{lambda_0} ExpLoss(lambda), and lambda_d = 0 for d = 1...D.
2 Take a forward step according to Eq. (12) and (13); the updated model is denoted by lambda^1.
3 Initialize alpha = (ExpLoss(lambda^0) - ExpLoss(lambda^1))/epsilon.
4 Take a backward step if and only if it leads to a decrease of the LassoLoss according to Eq. (14) and (15), where theta = 0; otherwise
5 Take a forward step according to Eq. (12) and (13); update alpha^t = min(alpha^{t-1}, (ExpLoss(lambda^{t-1}) - ExpLoss(lambda^t))/epsilon); and return to Step 4.

Figure 2: The BLasso algorithm
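Figure 2 can be sketched as follows, reusing exp_loss, exploss_after_update, and lasso_loss from the earlier sketches. This is a simplified illustration, not the authors' implementation: the fixed-lambda_0 initialization and the smaller-than-epsilon optimal steps described above are omitted, and setting theta very large disables backward steps, which yields the F-Boosting variant discussed in Section 5.3:

    def lassoloss_after_update(samples, lam, k, delta, alpha):
        # LassoLoss of Upd(lambda, k, delta) at penalty level alpha.
        lam2 = dict(lam)
        lam2[k] = lam2.get(k, 0.0) + delta
        return lasso_loss(samples, lam2, alpha)

    def blasso(samples, features, n_iters, eps=0.5, theta=0.0):
        lam = {}  # Step 1 (simplified): all weights start at 0.
        # Steps 2-3: one forward step (Eqs. 12-13), then initialize alpha.
        loss0 = exp_loss(samples, lam)
        _, k, d = min(((exploss_after_update(samples, lam, kk, dd), kk, dd)
                       for kk in features for dd in (eps, -eps)),
                      key=lambda t: t[0])
        lam[k] = d
        alpha = (loss0 - exp_loss(samples, lam)) / eps
        for _ in range(n_iters):
            cur = lasso_loss(samples, lam, alpha)
            # Step 4 (Eqs. 14-15): backward step if it cuts LassoLoss by > theta.
            nonzero = [kk for kk, v in lam.items() if v != 0.0]
            if nonzero:
                back_loss, back_k = min(
                    ((lassoloss_after_update(samples, lam, kk,
                                             -eps if lam[kk] > 0 else eps,
                                             alpha), kk) for kk in nonzero),
                    key=lambda t: t[0])
                if cur - back_loss > theta:
                    lam[back_k] += -eps if lam[back_k] > 0 else eps
                    continue
            # Step 5 (Eqs. 12-13): forward step; relax alpha if ExpLoss allows.
            prev = exp_loss(samples, lam)
            _, k, d = min(((exploss_after_update(samples, lam, kk, dd), kk, dd)
                           for kk in features for dd in (eps, -eps)),
                          key=lambda t: t[0])
            lam[k] = lam.get(k, 0.0) + d
            alpha = min(alpha, (prev - exp_loss(samples, lam)) / eps)
        return lam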
Zhao and Yu (2004) provide theoretical justifications for BLasso. It has been proved that (1) it is safe for BLasso to start with an initial alpha that is the largest alpha allowing an epsilon step away from 0 (i.e., larger values of alpha correspond to T(lambda) = 0); (2) for each value of alpha, BLasso performs coordinate descent (i.e., reduces ExpLoss by updating the weight of a feature) until there is no descent step; and (3) for each step where the value of alpha decreases, the lasso loss is guaranteed to be reduced. As a result, it can be proved that for a finite number of features and theta = 0, the BLasso algorithm shown in Figure 2 converges to the lasso solution as epsilon approaches 0.

5 Evaluation

5.1 Settings

We evaluated the training methods described above in the so-called cross-domain language model adaptation paradigm, where we adapt a model trained on one domain (which we call the background domain) to a different domain (the adaptation domain), for which only a small amount of training data is available.

The data sets we used in our experiments came from five distinct sources of text. A 36-million-word Nikkei Newspaper corpus was used as the background domain, on which the word trigram model was trained. We used four adaptation domains: Yomiuri (newspaper corpus), TuneUp (balanced corpus containing newspapers and other sources of text), Encarta (encyclopedia) and Shincho (collection of novels). All corpora were pre-word-segmented using a lexicon containing 167,107 entries. For each of the four domains, we created training data consisting of 72K sentences (0.9M~1.7M words) and test data of 5K sentences (65K~120K words) from each adaptation domain. The first 800 and 8,000 sentences of each adaptation training set were also used to show how different sizes of training data affected the performance of the various adaptation methods. Another 5K-sentence subset was used as held-out data for each domain.

We created the training samples for discriminative learning as follows. For each phonetic string A in the adaptation training data, we produced a lattice of candidate word strings W using the baseline system described in Gao et al. (2002), which uses a word trigram model trained via MLE on the Nikkei Newspaper corpus. For efficiency, we kept only the best 20 hypotheses in the candidate conversion set GEN(A) of each training sample for discriminative training. The oracle best hypothesis, which gives the minimum number of errors, was used as the reference transcript of A. We used unigrams and bigrams that occurred more than once in the training set as features in the linear model of Equation (2). The total number of candidate features we used was around 860,000.

5.2 Main Results

Table 1 summarizes the results of the various model training (adaptation) methods in terms of CER (%) and CER reduction (in parentheses) over competing models. In the first column, the number in parentheses next to the domain name indicates the number of training sentences used for adaptation.

Baseline, with results shown in Column 3, is the word trigram model. As expected, the CER correlates very well with the similarity between the background domain and the adaptation domain, where domain similarity is measured in terms of cross entropy (Yuan et al. 2005), as shown in Column 2.

MAP (maximum a posteriori), with results shown in Column 4, is a traditional LM adaptation method where the parameters of the background model are adjusted in such a way that maximizes the likelihood of the adaptation data. Our implementation takes the form of linear interpolation as described in Bacchiani et al. (2004): P(w|h) = lambda P_b(w|h) + (1 - lambda) P_a(w|h), where P_b is the probability of the background model, P_a is the probability trained on adaptation data using MLE, and the history h corresponds to the two preceding words (i.e. P_b and P_a are trigram probabilities). lambda is the interpolation weight optimized on held-out data.

Boosting, with results shown in Column 5, is the algorithm described in Figure 1. In our implementation, we use the shrinkage method suggested by Schapire and Singer (1999) and Collins and Koo (2005). At each iteration, we used the following update for the k-th feature:

    delta_k = (1/2) log ((C_k^+ + epsilon Z) / (C_k^- + epsilon Z))   (16)

where C_k^+ is a value increasing exponentially with the sum of margins of (W^R, W) pairs over the set where f_k is seen in W^R but not in W; C_k^- is the corresponding value related to the sum of margins over the set where f_k is seen in W but not in W^R. epsilon is a smoothing factor (whose value is optimized on held-out data) and Z is a normalization constant (whose value is the ExpLoss(.) of the training data according to the current model). We see that epsilon Z in Equation (16) plays the same role as nu in Equation (9).
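Equation (16) in code form (a direct transcription; computing C_k^+, C_k^-, and Z from the margins is left to the surrounding training loop):

    import math

    def boosting_update(c_plus, c_minus, eps, z):
        # Equation (16): delta_k = 0.5 * log((C_k+ + eps*Z) / (C_k- + eps*Z)).
        # The smoothing term eps*Z plays the same role as nu in Eq. (9).
        return 0.5 * math.log((c_plus + eps * z) / (c_minus + eps * z))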
BLasso, with results shown in Column 6, is the algorithm described in Figure 2. We find that the performance of BLasso is not very sensitive to the selection of the step size epsilon across training sets of different domains and sizes. Although a small epsilon is preferred in theory, as discussed earlier, it would lead to very slow convergence. Therefore, in our experiments, we always used a large step size (epsilon = 0.5) together with the so-called early stopping strategy, i.e., the number of iterations before stopping is optimized on held-out data.

In the task of LM for IME, there are millions of features and training samples, forming an extremely large and sparse matrix. We therefore applied the techniques described in Collins and Koo (2005) to speed up the training procedure. The resulting algorithms take around 15 and 30 minutes, respectively, for Boosting and BLasso to converge on an XEON MP 1.90GHz machine when training on an 8K-sentence training set.
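Operationally, early stopping amounts to checkpointing the model during training and keeping the iteration count with the lowest held-out CER; a schematic sketch, where cer_on_heldout is a hypothetical evaluation function and not part of the paper:

    def early_stop(checkpoints, cer_on_heldout):
        # checkpoints: {iteration: model}; pick the lowest held-out CER.
        best_iter = min(checkpoints, key=lambda t: cer_on_heldout(checkpoints[t]))
        return best_iter, checkpoints[best_iter]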

The results in Table 1 give rise to several observations. First of all, both discriminative training methods (i.e., Boosting and BLasso) outperform MAP substantially. The improvement margins are larger when the background and adaptation domains are more similar. This phenomenon is attributed to the underlying difference between the two adaptation methods: MAP aims to improve the likelihood of a distribution, so if the adaptation domain is very similar to the background domain, the difference between the two underlying distributions is so small that MAP cannot adjust the model effectively. Discriminative methods, on the other hand, do not have this limitation, for they aim to reduce errors directly.

Secondly, BLasso outperforms Boosting significantly (p-value < 0.01) on all test sets. The improvement margins vary with the training sets of different domains and sizes. In general, in cases where the adaptation domain is less similar to the background domain and a larger training set is used, the improvement of BLasso is more visible.

Note that the CER results of FSLR are not included in Table 1 because it achieves results very similar to those of the boosting algorithm with shrinkage when the controlling parameters of both algorithms are optimized via cross-validation. We discuss their difference in the next section.

5.3 Discussion

This section investigates which components of BLasso bring the improvement over Boosting. Comparing the algorithms in Figures 1 and 2, we notice three differences between BLasso and Boosting: (i) the use of backward steps in BLasso; (ii) BLasso uses a grid search (fixed step size) for feature selection in Equation (12), while Boosting uses a continuous search (optimal step size) in Equation (7); and (iii) BLasso uses a fixed step size for the feature update in Equation (13), while Boosting uses an optimal step size in Equation (8). We investigate these differences in turn.

To study the impact of backward steps, we compared BLasso with the boosting algorithm with a fixed step search and a fixed step update, henceforth referred to as F-Boosting. F-Boosting was implemented as in Figure 2, by setting a large value for theta in Equation (15), i.e., theta = 10^3, to prohibit backward steps. We find that although the training error curves of BLasso and F-Boosting are almost identical, the T(lambda) curves grow apart with iterations, as shown in Figure 3. The results show that with backward steps, BLasso achieves a better approximation to the true lasso solution: it leads to a model with similar training errors but lower complexity (in terms of the L1 penalty).

In our experiments we find that the benefit of using backward steps is only visible in later iterations, when BLasso's backward steps kick in. A typical example is shown in Figure 4. The early steps fit highly effective features, and in these steps BLasso and F-Boosting agree. For later steps, fine-tuning of features is required. BLasso with backward steps provides a better mechanism than F-Boosting for revising the previously chosen features to accommodate this fine level of tuning. Consequently we observe the superior performance of BLasso at later stages, as shown in our experiments.

As is well known for linear regression models, when there are many strongly correlated features, model parameters can be poorly estimated and exhibit high variance. By imposing a model size constraint, as in lasso, this phenomenon is alleviated. Therefore, we speculate that a better approximation to lasso, such as BLasso with backward steps, would be superior in eliminating the negative effect of strongly correlated features in model estimation. To verify this speculation, we performed the following experiments. For each training set, in addition to word unigram and bigram features, we introduced a new type of feature called the headword bigram. As described in Gao et al. (2002), headwords are defined as the content words of the sentence. Headword bigrams therefore constitute a special type of skipping bigram which can capture the dependency between two words that may not be adjacent, as sketched below.
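A minimal sketch of the two feature types, where is_content_word is a hypothetical predicate standing in for the headword detection procedure of Gao et al. (2002):

    def word_bigrams(words):
        # Ordinary word bigram features: pairs of adjacent words.
        return list(zip(words, words[1:]))

    def headword_bigrams(words, is_content_word):
        # Headword (skipping) bigrams: adjacent pairs in the subsequence of
        # content words; the two words need not be adjacent in the sentence.
        heads = [w for w in words if is_content_word(w)]
        return list(zip(heads, heads[1:]))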
In reality, a large portion of headword bigrams are identical to word bigrams, as two headwords can occur next to each other in text. In the adaptation test data we used, we find that headword bigram features for the most part either completely overlap with the word bigram features (i.e., all instances of a headword bigram also count as a word bigram) or do not overlap at all (i.e., a headword bigram feature is never observed as a word bigram feature); less than 20% of headword bigram features displayed a variable degree of overlap with word bigram features. In our data, the rate of completely overlapping features is 25% to 47%, depending on the adaptation domain. From this, we can say that the headword bigram features show a moderate to high degree of correlation with the word bigram features.

We then used BLasso and F-Boosting to train linear language models including both word bigram and headword bigram features.

We find that although the CER reduction from adding headword features is overall very small, the difference between the two versions of BLasso is more visible on all four test sets. Comparing Figures 5-8 with Figure 4, it can be seen that BLasso with backward steps outperforms the version without backward steps at much earlier stages of training, and with a larger margin. For example, on the Encarta data sets, BLasso outperforms F-Boosting after around 18,000 iterations with headword features (Figure 7), as opposed to 25,000 iterations without headword features (Figure 4). These results seem to corroborate our speculation that BLasso is more robust in the presence of highly correlated features.

To investigate the impact of using the grid search (fixed step size) versus the continuous search (optimal step size) for feature selection, we compared F-Boosting with FSLR, since they differ only in their search methods for feature selection. As shown in Figures 5 to 8, although FSLR is robust in that its test errors do not increase after many iterations, F-Boosting can reach a much lower error rate on three out of four test sets. Therefore, in the task of LM for IME, where CER is the most important metric, the grid search for feature selection is more desirable.

To investigate the impact of using a fixed versus an optimal step size for the feature update, we compared FSLR with Boosting. Although both algorithms achieve very similar CER results, the performance of FSLR is much less sensitive to the selected fixed step size. For example, we can select any value from 0.2 to 0.8, and in most settings FSLR achieves a very similar lowest CER after 20,000 iterations, and stays there for many iterations. In contrast, in Boosting, the optimal value of epsilon in Equation (16) varies with the sizes and domains of the training data, and has to be tuned carefully. We thus conclude that in our task FSLR is more robust against different training settings, and a fixed step size for the feature update is preferred.

6 Conclusion

This paper investigates two approximation lasso methods for LM, applied to a realistic task with a very large number of features and a sparse feature space. Our results on Japanese text input are promising. BLasso outperforms the boosting algorithm significantly in terms of CER reduction in all experimental settings. We have shown that this superior performance is a consequence of BLasso's backward steps and its fixed step size in both feature selection and feature weight update. Our experimental results in Section 5 show that the use of backward steps is vital for model fine-tuning after the major features are selected and for coping with strongly correlated features, while the fixed step size of BLasso is responsible for the improvement of CER and the robustness of the results. Experiments on other data sets and theoretical analysis are needed to further support our findings in this paper.

References

Bacchiani, M., Roark, B., and Saraclar, M. 2004. Language model adaptation with MAP estimation and the perceptron algorithm. In HLT-NAACL 2004.

Collins, Michael and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics 31(1).

Duda, Richard O., Hart, Peter E. and Stork, David G. 2001. Pattern classification. John Wiley & Sons, Inc.

Donoho, D., I. Johnstone, G. Kerkyacharian, and D. Picard. 1995. Wavelet shrinkage: asymptopia? (with discussion). J. Royal. Statist. Soc. 57.

Efron, B., T. Hastie, I. Johnstone, and R. Tibshirani. 2004. Least angle regression. Ann. Statist. 32.

Freund, Y., R. Iyer, R. E. Schapire, and Y. Singer. 1998. An efficient boosting algorithm for combining preferences. In ICML'98.

Hastie, T., R. Tibshirani and J. Friedman. 2001. The elements of statistical learning. Springer-Verlag, New York.

Gao, Jianfeng, Hisami Suzuki and Yang Wen. 2002. Exploiting headword dependency and predictive clustering for language modeling. In EMNLP 2002.

Gao, J., Yu, H., Yuan, W., and Xu, P. 2005. Minimum sample risk methods for language modeling. In HLT/EMNLP 2005.

Osborne, M.R., Presnell, B. and Turlach, B.A. 2000a. A new approach to variable selection in least squares problems. Journal of Numerical Analysis, 20(3).
Osborne, M.R., Presnell, B. and Turlach, B.A. 2000b. On the lasso and its dual. Journal of Computational and Graphical Statistics, 9(2).

Roark, Brian, Murat Saraclar and Michael Collins. 2004. Corrective language modeling for large vocabulary ASR with the perceptron algorithm. In ICASSP 2004.

Schapire, Robert E. and Yoram Singer. 1999. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3).

Suzuki, Hisami and Jianfeng Gao. 2005. A comparative study on language model adaptation using new evaluation metrics. In HLT/EMNLP 2005.

Tibshirani, R. 1996. Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B, 58(1).

Yuan, W., J. Gao and H. Suzuki. 2005. An empirical study on language model adaptation using a metric of domain similarity. In IJCNLP 05.

Zhao, P. and B. Yu. 2004. Boosted lasso. Tech Report, Statistics Department, U.C. Berkeley.

Zhu, J., S. Rosset, T. Hastie, and R. Tibshirani. 2003. 1-norm support vector machines. NIPS 16. MIT Press.

Table 1. CER (%) and CER reduction (%) (Y = Yomiuri; T = TuneUp; E = Encarta; S = Shincho)

Domain    Entropy vs. Nikkei   Baseline   MAP (over Baseline)   Boosting (over MAP)   BLasso (over MAP/Boosting)
Y (800)   -                    -          - (+0.00)             3.13 (+15.41)         3.01 (+18.65/+3.83)
Y (8K)    -                    -          - (+0.27)             2.88 (+21.95)         2.85 (+22.76/+1.04)
Y (72K)   -                    -          - (+0.27)             2.78 (+24.66)         2.73 (+26.02/+1.80)
T (800)   -                    -          - (+0.00)             5.69 (+2.07)          5.63 (+3.10/+1.05)
T (8K)    -                    -          - (+1.89)             5.48 (+5.48)          5.33 (+6.49/+2.74)
T (72K)   -                    -          - (+5.85)             5.33 (+2.56)          5.05 (+7.68/+5.25)
E (800)   -                    -          - (+6.25)             9.82 (-2.29)          9.18 (+4.38/+6.52)
E (8K)    -                    -          - (+15.63)            8.54 (+1.16)          8.04 (+6.94/+5.85)
E (72K)   -                    -          - (+22.07)            7.53 (+5.64)          7.20 (+9.77/+4.38)
S (800)   -                    -          - (+2.63)             - (-0.42)             - (+0.59/+1.01)
S (8K)    -                    -          - (+8.46)             - (+0.54)             - (+3.77/+3.25)
S (72K)   -                    -          - (+11.66)            - (+4.74)             9.64 (+10.41/+5.95)

Figure 3. L1 curves: models are trained on the E(8K) dataset.

Figure 4. Test error curves: models are trained on the E(8K) dataset.

Figure 5. Test error curves: models are trained on the Y(8K) dataset, including headword bigram features.

Figure 6. Test error curves: models are trained on the T(8K) dataset, including headword bigram features.

Figure 7. Test error curves: models are trained on the E(8K) dataset, including headword bigram features.

Figure 8. Test error curves: models are trained on the S(8K) dataset, including headword bigram features.


Testing a new idea to solve the P = NP problem with mathematical induction Tesng a new dea o solve he P = NP problem wh mahemacal nducon Bacground P and NP are wo classes (ses) of languages n Compuer Scence An open problem s wheher P = NP Ths paper ess a new dea o compare he

More information

Lecture 18: The Laplace Transform (See Sections and 14.7 in Boas)

Lecture 18: The Laplace Transform (See Sections and 14.7 in Boas) Lecure 8: The Lalace Transform (See Secons 88- and 47 n Boas) Recall ha our bg-cure goal s he analyss of he dfferenal equaon, ax bx cx F, where we emloy varous exansons for he drvng funcon F deendng on

More information

A Novel Iron Loss Reduction Technique for Distribution Transformers. Based on a Combined Genetic Algorithm - Neural Network Approach

A Novel Iron Loss Reduction Technique for Distribution Transformers. Based on a Combined Genetic Algorithm - Neural Network Approach A Novel Iron Loss Reducon Technque for Dsrbuon Transformers Based on a Combned Genec Algorhm - Neural Nework Approach Palvos S. Georglaks Nkolaos D. Doulams Anasasos D. Doulams Nkos D. Hazargyrou and Sefanos

More information

Chapter 4. Neural Networks Based on Competition

Chapter 4. Neural Networks Based on Competition Chaper 4. Neural Neworks Based on Compeon Compeon s mporan for NN Compeon beween neurons has been observed n bologcal nerve sysems Compeon s mporan n solvng many problems To classfy an npu paern _1 no

More information

Ordinary Differential Equations in Neuroscience with Matlab examples. Aim 1- Gain understanding of how to set up and solve ODE s

Ordinary Differential Equations in Neuroscience with Matlab examples. Aim 1- Gain understanding of how to set up and solve ODE s Ordnary Dfferenal Equaons n Neuroscence wh Malab eamples. Am - Gan undersandng of how o se up and solve ODE s Am Undersand how o se up an solve a smple eample of he Hebb rule n D Our goal a end of class

More information

Fitting a Conditional Linear Gaussian Distribution

Fitting a Conditional Linear Gaussian Distribution Fng a Condonal Lnear Gaussan Dsrbuon Kevn P. Murphy 28 Ocober 1998 Revsed 29 January 2003 1 Inroducon We consder he problem of fndng he maxmum lkelhood ML esmaes of he parameers of a condonal Gaussan varable

More information

. The geometric multiplicity is dim[ker( λi. A )], i.e. the number of linearly independent eigenvectors associated with this eigenvalue.

. The geometric multiplicity is dim[ker( λi. A )], i.e. the number of linearly independent eigenvectors associated with this eigenvalue. Mah E-b Lecure #0 Noes We connue wh he dscusson of egenvalues, egenvecors, and dagonalzably of marces We wan o know, n parcular wha condons wll assure ha a marx can be dagonalzed and wha he obsrucons are

More information

Li An-Ping. Beijing , P.R.China

Li An-Ping. Beijing , P.R.China A New Type of Cpher: DICING_csb L An-Png Bejng 100085, P.R.Chna apl0001@sna.com Absrac: In hs paper, we wll propose a new ype of cpher named DICING_csb, whch s derved from our prevous sream cpher DICING.

More information

Chapter 6: AC Circuits

Chapter 6: AC Circuits Chaper 6: AC Crcus Chaper 6: Oulne Phasors and he AC Seady Sae AC Crcus A sable, lnear crcu operang n he seady sae wh snusodal excaon (.e., snusodal seady sae. Complee response forced response naural response.

More information

(,,, ) (,,, ). In addition, there are three other consumers, -2, -1, and 0. Consumer -2 has the utility function

(,,, ) (,,, ). In addition, there are three other consumers, -2, -1, and 0. Consumer -2 has the utility function MACROECONOMIC THEORY T J KEHOE ECON 87 SPRING 5 PROBLEM SET # Conder an overlappng generaon economy le ha n queon 5 on problem e n whch conumer lve for perod The uly funcon of he conumer born n perod,

More information

Comparison of Supervised & Unsupervised Learning in βs Estimation between Stocks and the S&P500

Comparison of Supervised & Unsupervised Learning in βs Estimation between Stocks and the S&P500 Comparson of Supervsed & Unsupervsed Learnng n βs Esmaon beween Socks and he S&P500 J. We, Y. Hassd, J. Edery, A. Becker, Sanford Unversy T I. INTRODUCTION HE goal of our proec s o analyze he relaonshps

More information

Robustness of DEWMA versus EWMA Control Charts to Non-Normal Processes

Robustness of DEWMA versus EWMA Control Charts to Non-Normal Processes Journal of Modern Appled Sascal Mehods Volume Issue Arcle 8 5--3 Robusness of D versus Conrol Chars o Non- Processes Saad Saeed Alkahan Performance Measuremen Cener of Governmen Agences, Insue of Publc

More information

Time-interval analysis of β decay. V. Horvat and J. C. Hardy

Time-interval analysis of β decay. V. Horvat and J. C. Hardy Tme-nerval analyss of β decay V. Horva and J. C. Hardy Work on he even analyss of β decay [1] connued and resuled n he developmen of a novel mehod of bea-decay me-nerval analyss ha produces hghly accurae

More information

MANY real-world applications (e.g. production

MANY real-world applications (e.g. production Barebones Parcle Swarm for Ineger Programmng Problems Mahamed G. H. Omran, Andres Engelbrech and Ayed Salman Absrac The performance of wo recen varans of Parcle Swarm Opmzaon (PSO) when appled o Ineger

More information

Dynamically Weighted Majority Voting for Incremental Learning and Comparison of Three Boosting Based Approaches

Dynamically Weighted Majority Voting for Incremental Learning and Comparison of Three Boosting Based Approaches Proceedngs of Inernaonal Jon Conference on Neural Neworks, Monreal, Canada, July 3 - Augus 4, 2005 Dynamcally Weghed Majory Vong for Incremenal Learnng and Comparson of Three Boosng Based Approaches Alasgar

More information

ECE 366 Honors Section Fall 2009 Project Description

ECE 366 Honors Section Fall 2009 Project Description ECE 366 Honors Secon Fall 2009 Projec Descrpon Inroducon: Muscal genres are caegorcal labels creaed by humans o characerze dfferen ypes of musc. A muscal genre s characerzed by he common characerscs shared

More information

HEAT CONDUCTION PROBLEM IN A TWO-LAYERED HOLLOW CYLINDER BY USING THE GREEN S FUNCTION METHOD

HEAT CONDUCTION PROBLEM IN A TWO-LAYERED HOLLOW CYLINDER BY USING THE GREEN S FUNCTION METHOD Journal of Appled Mahemacs and Compuaonal Mechancs 3, (), 45-5 HEAT CONDUCTION PROBLEM IN A TWO-LAYERED HOLLOW CYLINDER BY USING THE GREEN S FUNCTION METHOD Sansław Kukla, Urszula Sedlecka Insue of Mahemacs,

More information