Generalized Likelihood Ratio Tests Based on The Asymptotic Variant of The Minimax Approach

International Journal of Statistics and Probability; Vol. 4, No. 3; 2015
Published by Canadian Center of Science and Education

Generalized Likelihood Ratio Tests Based on The Asymptotic Variant of The Minimax Approach

Han Yu
Department of Mathematics, Computer Science and Information Systems, Northwest Missouri State University, USA
Correspondence: Han Yu, Department of Mathematics, Computer Science and Information Systems, Northwest Missouri State University, USA. E-mail: hanyu@nwmissouri.edu

Received: April 1, 2015   Accepted: May 8, 2015   Online Published: July 1, 2015

Abstract

Maximum likelihood ratio test statistics may not exist in general in the nonparametric function estimation setting. In this paper a new class of generalized likelihood ratio (GLR) tests is proposed for nonparametric goodness-of-fit testing via the asymptotic variant of the minimax approach. The proposed nonparametric tests are developed to be asymptotically distribution-free based on latent variable representations. The nonparametric tests are ameliorated to be appropriately complex so that they are analytically tractable and numerically feasible. They are well applicable to the adaptive study of hypothesis testing problems of growing dimensions. To assess the proposed GLR tests, their asymptotic properties are derived. The procedure can be viewed as a novel nonparametric extension of the classical parametric likelihood ratio test that guards against possible gross misspecification of the data-generating mechanism. Simulations of the proposed minimax-type GLR tests are carried out to investigate small sample size performance and show that the GLR tests have appealing small sample size properties.

Keywords: kernel, latent variable representations, generalized likelihood ratio, minimax approach

1. Introduction

Parametric models have the advantage of easy interpretation and efficient computation over nonparametric models. However, parametric models are often suspected of being at risk of misspecification in specific real applications. Many statistical problems may not be parametric in practice. For example, the objects of estimation or testing may be speech, images, and so on. They can be treated as unknown infinite-dimensional parameters that belong to some specific functional set. As a guard against possible misspecification of the data-generating mechanism, nonparametric alternatives avoid the misspecification problem by making statistical models rich enough to include essentially all relevant sampling distributions emerging in real-world applications. Nonparametric goodness-of-fit tests deal with testing the adequacy of model complexity to data by adopting nonparametric alternatives. Nonparametric goodness-of-fit tests have a long history in statistics (D'Agostino and Stephens, 1986) and are increasingly commonly used in data mining, pattern recognition, machine learning and statistical analysis. There are many connections between nonparametric hypothesis testing and nonparametric estimation. However, nonparametric hypothesis testing theory is essentially different from estimation theory due to the curse of dimensionality. There are many new effects in nonparametric hypothesis testing theory compared with parametric hypothesis testing as well as with nonparametric estimation (Ingster, 2002).

In nonparametric goodness-of-fit testing, there are many discrepancy-based approaches designed for testing against nonparametric alternatives. The classical approaches based on the $L_\infty$ and $L_2$ distances are popular in nonparametric goodness-of-fit tests. They measure the difference between the estimators under the null and alternative models and are generalizations of the Kolmogorov-Smirnov (KS) and Cramér-von Mises (CvM) types of statistics. However, these approaches suffer from some drawbacks. First of all, the null distribution of the test statistic is unknown and depends critically on nuisance parameters. Second, the choices of measures and weights can be arbitrary, which limits the applicability of the discrepancy-based methods (Fan et al., 2001). Useful counterpart approaches are the maximum likelihood ratio approaches, which are generally well applicable to most parametric hypothesis testing problems. The most fundamental property that significantly contributes to the success of the maximum likelihood ratio approaches is that their asymptotic

distributions under null hypotheses are independent of nuisance parameters. This property makes it possible to determine the null distribution of the likelihood ratio statistic either from the asymptotic distribution or by Monte Carlo simulation with the nuisance parameters set at some estimated values. In addition, likelihood-based approaches generally lead to more efficient estimation of unknown model parameters and allow for proper assessment of uncertainty, which has a practical impact.

The traditional maximum likelihood ratio tests are not naturally applicable to problems with nonparametric models as alternatives in general. First of all, the nonparametric maximum likelihood estimate may not exist in the density function space that specifies the nonparametric density alternatives, and thus nonparametric maximum likelihood ratio tests are not applicable in general (Bahadur, 1958; Le Cam, 1990). Even if the nonparametric maximum likelihood estimate exists for the alternatives, it is awkward to select the same optimal smoothing parameter for both steps, namely nonparametric estimation under the nonparametric alternatives and testing against the nonparametric alternatives. Some likelihood ratio test procedures that are distribution-free under parametric alternatives may become dependent on nuisance parameters under nonparametric alternatives, since an infinite-dimensional neighborhood surrounds a null hypothesis. To attenuate these difficulties, which arise from the nonparametric alternatives due to the curse of dimensionality, an approach based on the asymptotic variant of the minimax approach is proposed along the lines of the parametric likelihood ratio tests, and it possesses the distribution-free property. It is a method that tests the adequacy of the model complexity to avoid overfitting in the nonparametric setting. Aiming at a unified principle for nonparametric alternative problems from the uncertainty perspective, the proposed approach replaces the maximum likelihood estimate under the nonparametric density alternatives by tractable least favorable nonparametric estimates, allowing the flexibility of choosing the same optimal tuning parameters.

In the next section, the tests are introduced and formulated. Section 3 presents the main results on the asymptotic rate of convergence. Simulation results on the small sample size performance of the GLR tests are contained in Section 4. All mathematical proofs of the main results are collected in an appendix.

2. The Test Statistics

Assume $X = (X_1, X_2, \ldots, X_n)$ is an independent identically distributed (i.i.d.) random sample from the probability density function (pdf) $f(x)$ with cumulative distribution function (CDF) $F$. $X_{(i)}$, $i = 1, 2, \ldots, n$, are the order statistics of $X$. Let $X_{(i)} = X_{(1)}$ if $i \le 0$ and $X_{(i)} = X_{(n)}$ if $i \ge n+1$ hereafter for notational simplicity. The sample directly tells us much about $F$ via the empirical cumulative distribution function (ECDF) $F_n(x) = \frac{1}{n}\sum_{i=1}^{n} I(X_i \le x)$, where $I(\cdot)$ is an indicator function. The quantile function (qf) $Q \equiv F^{-1}$ of $F$, defined as $Q(u) := \inf\{x : F(x) \ge u\}$, $0 < u < 1$, is sometimes the object of more direct interest than $F$ itself. The data $X$ relate directly to $Q$ simply by taking the left-continuous inverse of $F_n$, namely the usual empirical quantile function (eqf)

$$Q_n(u) = \sum_{i=1}^{n} X_{(i)}\, I_{(\frac{i-1}{n}, \frac{i}{n}]}(u), \quad 0 < u < 1. \quad (1)$$

Let $\mathcal{F}_0 = \{ f_0(x, \theta), \theta \in \Theta_0 \}$ be a family of probability density functions with CDFs $F_0(x, \theta)$ that are measurable and absolutely continuous in $x$ for every $\theta$ and continuous in $\theta$ for every $x$, where $\Theta_0$ is an open subset of a $d$-dimensional space $R^d$. The problem of interest is to test the null hypothesis that the unknown density $f$ is in $\mathcal{F}_0$, i.e.

$$H_0 : f = f_0(x, \theta), \ \text{for some } \theta \in \Theta_0, \quad (2)$$

versus the alternative hypothesis

$$H_a : f \ne f_0(x, \theta), \ \text{for all } \theta \in \Theta_0. \quad (3)$$

Our generalized likelihood ratio tests for these hypotheses are based on $\lambda_n = l_n(f) - l_n(f_0)$, where $l_n(\cdot)$ is a log-likelihood functional of a density function dependent on the sample size $n$. A large value of $\lambda_n$ is evidence against the null hypothesis $H_0$, since the alternative family of nonparametric models is then far more likely to generate the observed data.
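The empirical objects above are simple to compute directly. The following is a minimal Python sketch (not part of the paper; function names are illustrative) of the ECDF $F_n$ and the empirical quantile function $Q_n$ in (1).

```python
import numpy as np

def ecdf(x, t):
    """Empirical CDF F_n(t) = (1/n) * sum_i I(X_i <= t)."""
    x = np.asarray(x)
    return np.mean(x[:, None] <= np.atleast_1d(t), axis=0)

def empirical_quantile(x, u):
    """Empirical quantile function Q_n(u) as in (1):
    Q_n(u) = X_(i) for u in ((i-1)/n, i/n]."""
    xs = np.sort(np.asarray(x))
    n = len(xs)
    u = np.atleast_1d(u)
    # index i with (i-1)/n < u <= i/n, i.e. i = ceil(n*u); 0-based below
    idx = np.clip(np.ceil(n * u).astype(int) - 1, 0, n - 1)
    return xs[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=50)
    print(empirical_quantile(x, [0.1, 0.5, 0.9]))
```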

2.1 Nonparametric Likelihood Estimate under Nonparametric Density Alternatives

On one hand, due to a lack of clear knowledge about the data-generating mechanism, we can only make very general assumptions and leave a large portion of the mechanism unspecified, so that the distribution $f$ of the data is not specified by a finite number of parameters. Such nonparametric models are quite popular guards against possible gross misspecification of the data-generating mechanism, especially when data can be collected adequately. Further, latent variables can be introduced to allow the relatively more complex log-likelihood of $f$, i.e., $l_n(f)$ over the observed data under the nonparametric alternatives, to be expressed in terms of more tractable nonparametric estimates of $l_n(f)$ over the expanded space of observed and latent variables. The introduction of latent variables thus allows a least favorable log-likelihood $l_n(f)$ to be constructed from simpler nonparametric estimates with the flexibility of choosing the same optimal tuning parameters. The latent variable representation of the log-likelihood through the introduction of the uniform order statistics $U_{(1)}, \ldots, U_{(n)}$ is as follows:

$$l_n(f) = \sum_{i=1}^{n}\log f(X_{(i)}) = \sum_{i=1}^{n}\log f(Q(U_{(i)})) = -\sum_{i=1}^{n}\log q(U_{(i)}), \quad (4)$$

where $q(\cdot) = (f(Q(\cdot)))^{-1}$ is what Tukey (1965) called the sparsity function and Parzen (1979) called the quantile density function (qdf). The quantile density function is of much practical relevance mainly because it appears as part of the asymptotic variance of empirical quantiles. Histogram-type estimators of the quantile density function have been suggested by Siddiqui (1960) and studied by Bloch and Gastwirth (1968), Bofinger (1975), Reiss (1978), Sheather and Maritz (1983), and Falk (1986). Falk (1984) showed that the kernel-type estimate of the q-quantile beats the sample q-quantile under suitable conditions. The kernel estimate of the quantile function $Q(\cdot)$ is

$$Q_{n,h_n}(t, Q_n) := \int_0^1 Q_n(u)\, K_{h_n}(t - u)\, du, \quad 0 < t < 1, \quad (5)$$

where $K_{h_n}(\cdot) = \frac{1}{h_n} K(\frac{\cdot}{h_n})$ with bandwidth $h_n$. Then

$$q_{n,h_n}(t, Q_n) := \frac{d}{dt} Q_{n,h_n}(t, Q_n) = \frac{d}{dt}\int_0^1 Q_n(u)\, K_{h_n}(t - u)\, du, \quad 0 < t < 1,$$

can be taken as a kernel estimator of $q(\cdot)$. To make the estimator well defined, the kernels $K_{h_n}$ must satisfy certain differentiability conditions. The conditions detailed in the next section result in

$$q_{n,h_n}(t, Q_n) = \int_0^1 Q_n(u)\, k_{h_n}(t - u)\, du, \quad 0 < t < 1, \quad (6)$$

where the kernel $k(\cdot) = K'(\cdot)$ is the derivative of $K$, with $k_{h_n}(\cdot) = \frac{1}{h_n^2} k(\frac{\cdot}{h_n})$.

The $q(U_{(i)})$ in (4) can be estimated by $q_{n,h_n}(U_{(i)}, Q_n)$. Thus $l_n(f)$ in (4) can be estimated by

$$-\sum_{i=1}^{n}\log q_{n,h_n}(U_{(i)}, Q_n). \quad (7)$$

On the other hand, both the nonparametric estimate of the quantile density function $q_{n,h_n}(\cdot, Q_n)$ and the log-likelihood terms $\log q(U_{(i)})$ are evaluated at the same training data set. To obtain a predictive quantity that does not model the noise components of the training data, the log-likelihood in (7) is evaluated at the grid of the means $EU_{(i)}$, $G_n = \{\frac{i}{n+1},\ i = 1, \ldots, n\}$, in the interval $[0, 1]$ given the sample size $n$. The grid $G_n$ then characterizes the $U_{(i)}$'s. The total number of points in the grid $G_n$ grows linearly with the sample size $n$. The statistic (7) thus can be ameliorated by setting $U_{(i)}$ as $\frac{i}{n+1}$ to be numerically efficient. With enough data, this comes arbitrarily close to the true log-likelihood function.

Let $a_n = \lceil n h_n \rceil$, where $\lceil x \rceil := \min\{k \in Z : k \ge x\}$; the ceiling of a real number $x$ is the smallest integer $\ge x$. Calculations using integration by parts show that $q_{n,h_n}$ at $t = \frac{i}{n+1}$ is a sum of locally weighted order statistics:

$$q_{n,h_n}\Big(\frac{i}{n+1}, Q_n\Big) = \sum_{j\in J_i} w_{nij}\, X_{(j)},$$

where $w_{nij} = K_{h_n}\big(\frac{i}{n+1} - \frac{j-1}{n}\big) - K_{h_n}\big(\frac{i}{n+1} - \frac{j}{n}\big)$ and $j \in J_i = [m_i, m_i']$ with $m_i = \lceil\frac{ni}{n+1}\rceil - a_n + 1$ and $m_i' = \lceil\frac{ni}{n+1}\rceil + a_n$ for each $i$. Note that $w_{nij}$ is an $\frac{i}{n+1}$-centered local difference kernel in $j$. In particular, if $Q_n(\cdot) = U_n(\cdot)$, the uniform empirical quantile function of the $U_{(i)}$, then

$$q_{n,h_n}\Big(\frac{i}{n+1}; U_n\Big) = \sum_{j\in J_i} w_{nij}\, U_{(j)}.$$

It is well known from Wald's statistical decision theory that minimax problems correspond to Bayesian problems for the least favorable priors. The proposed generalized log-likelihood $l_n(f)$ under the nonparametric alternatives can thus be estimated nonparametrically, with numerical efficiency, in the sense of the least favorable priors by

$$\tilde l_n(f) = -\sum_{i=1}^{n}\log q_{n,h_n}\Big(\frac{i}{n+1}; Q_n\Big) = -\sum_{i=1}^{n}\log\Big(\sum_{j\in J_i} w_{nij}\, X_{(j)}\Big). \quad (8)$$

2.2 Parametric Likelihood Estimate under the Parametric Null

Under the null hypothesis $f = f_0(x, \theta) \in \mathcal{F}_0$, $\theta$ can be estimated by the maximum likelihood estimator $\hat\theta_n = \arg\max_{\theta\in\Theta_0} L_n(X, \theta)$, where $L_n(X, \theta) = \frac{1}{n}\sum_{i=1}^{n}\log f_0(X_i, \theta)$. We denote

$$\hat l_n(f_0) := \sum_{i=1}^{n}\log f_0(X_{(i)}, \hat\theta_n). \quad (9)$$

2.3 Generalized Likelihood Ratio Test

The proposed minimax-type generalized likelihood ratio (GLR) test statistic is $T_{n,h_n} = \hat\lambda_n$ with

$$\hat\lambda_n = \tilde l_n(f) - \hat l_n(f_0) = -\sum_{i=1}^{n}\log q_{n,h_n}\Big(\frac{i}{n+1}; Q_n\Big) - \sum_{i=1}^{n}\log f_0(X_{(i)}, \hat\theta_n) = U_{n,h_n} + V_{n,h_n} + W_n(\hat\theta_n),$$

where

$$U_{n,h_n} := -\sum_{i=1}^{n}\log q_{n,h_n}\Big(\frac{i}{n+1}; U_n\Big),$$

$$V_{n,h_n} := -\sum_{i=1}^{n}\log\frac{f(X_{(i)})\, q_{n,h_n}(\frac{i}{n+1}; Q_n)}{q_{n,h_n}(\frac{i}{n+1}; U_n)},$$

$$W_n(\hat\theta_n) := \sum_{i=1}^{n}\log\frac{f(X_{(i)})}{f_0(X_{(i)}, \hat\theta_n)}.$$

Our asymptotic analysis shows that, under the conditions detailed in the next section, the first term $U_{n,h_n}$ dominates the asymptotic null distribution of $T_{n,h_n}$ while the other two terms are asymptotically negligible. Since $U_{n,h_n}$ is a function of uniform random variables that do not depend on the underlying pdf $f$, the asymptotic null distribution is therefore distribution-free.
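To make the construction concrete, here is a hedged Python sketch (illustrative names, not the paper's code) of the minimax-type GLR statistic $T_{n,h_n} = \hat\lambda_n$ for the composite normal null used in Section 4, with the Gaussian maximum likelihood estimates plugged into (9).

```python
import numpy as np
from scipy.stats import norm

def K_triweight(u):
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, 35.0 / 32.0 * (1.0 - u ** 2) ** 3, 0.0)

def glr_statistic(x, h, K=K_triweight):
    """Minimax-type GLR statistic T_{n,h} = l~_n(f) - l^_n(f_0), cf. (8)-(9),
    for the composite normal null (mean and variance unspecified)."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    # nonparametric part: -sum_i log q_{n,h}(i/(n+1); Q_n)
    pad = int(np.ceil(n * h)) + 1
    j = np.arange(1 - pad, n + pad + 1)
    Xj = xs[np.clip(j - 1, 0, n - 1)]              # X_(j) clamped at the boundaries
    Kh = lambda u: K(u / h) / h
    t = (np.arange(1, n + 1) / (n + 1))[:, None]   # grid G_n
    w = Kh(t - (j - 1) / n) - Kh(t - j / n)        # weights w_{nij}
    loglik_np = -np.sum(np.log((w * Xj).sum(axis=1)))
    # parametric part under H_0 with the MLE theta_hat = (mu_hat, sigma_hat)
    mu_hat, sigma_hat = xs.mean(), xs.std(ddof=0)
    loglik_null = np.sum(norm.logpdf(xs, loc=mu_hat, scale=sigma_hat))
    return loglik_np - loglik_null

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    print(glr_statistic(rng.normal(size=50), h=0.25))
```

Large values of the statistic are evidence against $H_0$; by the distribution-free property discussed below, its null critical values can be obtained by simulation.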

3. Asymptotic Results

3.1 Asymptotic Null Distribution

Let $x_F := \sup\{x : F(x) = 0\}$ and $x^F := \inf\{x : F(x) = 1\}$, $x_F < x^F$. We need regularity conditions on the kernel functions, smoothing parameters and density functions for our results. We assume that

(K.1) $K(\cdot)$ is a pdf, symmetric about 0;
(K.2) $\int K^2(x)\,dx < \infty$;
(K.3) $K(\cdot)$ is at least three times continuously differentiable on the compact support $(-1, 1)$.

(H.1) $h_n \to 0$ and $n h_n \to \infty$ as $n \to \infty$;
(H.2) $n h_n / \log^2 n \to \infty$ as $n \to \infty$;
(H.3) $n^{\delta} h_n \to 0$ for some $\delta > 0$ as $n \to \infty$, and $n^2 h_n^3 \log^4 n \to 0$ as $n \to \infty$.

(F.1) The pdf $f$ with CDF $F(x)$ is strictly positive and continuously differentiable on $(x_F, x^F)$;
(F.2) $A = \lim_{x\downarrow x_F} f(x) < \infty$ and $B = \lim_{x\uparrow x^F} f(x) < \infty$; either $A = 0$ ($B = 0$) or $f$ is nondecreasing (nonincreasing) on an interval to the right of $x_F$ (to the left of $x^F$).
(F.3) There exists a constant $\gamma > 0$ such that
$$\sup_{x_F < x < x^F} \frac{F(x)(1 - F(x))\,|f'(x)|}{f^2(x)} \le \gamma,$$
where $f'(x)$ denotes the first derivative with respect to $x$. It is noteworthy that the inequality in (F.3) is equivalent to
(F.3$'$) $\sup_{0<u<1} u(1-u)\,|J(u)| \le \gamma$, where $J(u) := d\log q(u)/du$ is the score function.

Note that the term $W_n(\hat\theta_n)$ does not depend on the smoothing parameter $h_n$. Under the null hypothesis $H_0$ it is simply the log-likelihood ratio statistic. Therefore, its order of magnitude is $O_P(1)$ under the null hypothesis $H_0$. So we have

Lemma 1. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h_n$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). Under the null hypothesis $H_0$, we have $\sqrt{2h_n}\, W_n(\hat\theta_n) = o_P(1)$.

Lemma 2. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h_n$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). Under the null hypothesis $H_0$, we have $\sqrt{2h_n}\, V_{n,h_n} = o_P(1)$.

Theorem 1. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h_n$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). Under the null hypothesis $H_0$, we have
$$\sqrt{2h_n}\,\big(T_{n,h_n} - \mu_{n,h_n}(K)\big) \overset{L}{\longrightarrow} N\big(0, \sigma^2(K)\big), \quad \text{as } n \to \infty,$$
where
$$\mu_{n,h_n}(K) = \frac{1}{2h_n}\int K^2(x)\,dx + o\Big(\frac{1}{\sqrt{n^2h_n^3}}\Big), \qquad \sigma^2(K) = 2\int_0^2 \frac{1}{1+z}\Big(\int K(x)K(x-z)\,dx\Big)^2 dz.$$

Remark 1. The above result shows that the mean of $T_{n,h_n}(K)$ and the variance of $T_{n,h_n}(K)$ have the same rate $\frac{1}{2h_n}$. The test statistic can be regarded as a $\chi^2$-type test statistic in this sense.

Remark 2. The mean of the GLR statistic, whose leading term is $\frac{1}{2h_n}\int K^2(x)\,dx$, is the counterpart of the degrees of freedom in $\chi^2$. It is equivalent to the difference between the effective numbers of parameters used under the null and alternative hypotheses. This can be explained as follows. Suppose that we partition the support of $Q(u)$ into equispaced intervals, each of length $2h_n$. This results in roughly $\frac{1}{2h_n}$ intervals. Since the local smoother uses overlapping intervals, the independence in $\chi^2$ is not satisfied, and therefore the covariances result in the convolution factor making the effective number of parameters slightly different from $\frac{1}{2h_n}\int K^2(x)\,dx$.
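The centering and scaling constants in Theorem 1 depend only on the kernel. The following small numerical sketch (an illustration of the expressions displayed above as reconstructed here, not code from the paper; all names are illustrative) evaluates $\int K^2$ and $\sigma^2(K)$ for the triweight kernel used in Section 4.

```python
import numpy as np
from scipy import integrate

def K_triweight(u):
    """Triweight kernel used in Section 4, supported on [-1, 1]."""
    return 35.0 / 32.0 * (1.0 - u ** 2) ** 3 if abs(u) <= 1.0 else 0.0

# integral of K^2, the factor in the leading term of mu_{n,h_n}(K)
K2, _ = integrate.quad(lambda x: K_triweight(x) ** 2, -1.0, 1.0)

def conv(z):
    """int K(x) K(x - z) dx (zero for |z| > 2 because of the compact support)."""
    val, _ = integrate.quad(lambda x: K_triweight(x) * K_triweight(x - z), -1.0, 1.0)
    return val

# sigma^2(K) = 2 * int_0^2 (1 + z)^{-1} * (int K(x)K(x - z) dx)^2 dz, as displayed above
sigma2 = 2.0 * integrate.quad(lambda z: conv(z) ** 2 / (1.0 + z), 0.0, 2.0)[0]

print("int K^2(x) dx =", K2)   # enters mu_{n,h_n}(K) = (2 h_n)^{-1} int K^2 + ...
print("sigma^2(K)    =", sigma2)
```

In applications these constants are rarely needed explicitly; as Remark 3 below notes, the null distribution can simply be simulated.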

Remark 3. $\mu_{n,h_n}(K)$ and $\sigma^2(K)$ are independent of nuisance parameters. One does not have to derive the constants $\mu_{n,h_n}(K)$ and $\sigma^2(K)$ theoretically in order to use the GLR tests in applications: as long as the test statistic has such a distribution-free property, one can simply simulate the null distributions by setting the nuisance parameters under the null hypothesis at reasonable values or estimates.

4. Simulation Study on Bandwidth Selection and Powers

Nonparametric methods are typically indexed by a smoothing parameter (i.e. bandwidth) which controls the degree of complexity. The choice of bandwidth is therefore critical to implementation. In order to compute our test statistic for a given data set, one needs to specify the order of the smoothing parameter $h$. Our asymptotic study suggests that $h$ should be chosen adaptively according to the sample size, for example ranging from the order of $n^{-1}\log^2 n$ to the order of $n^{-2/3}\log^{-4/3} n$ by conditions (H.2) and (H.3); this ensures the distribution-free property and the consistency of our test as far as the order of $h$ is concerned. In addition, knowledge of the distributional assumption in the null hypothesis can be utilized to estimate the optimal bandwidth $h_n^{opt}$ in terms of power. Specifically, to test $H_0 : f_0(x, \theta) \in \mathcal{F}_0$, we choose $\hat h_n^{opt}$ as the estimate of $h_n^{opt}$ for the given sample size that minimizes $\tilde l_n(f) - \hat l_n(f_0)$ with respect to $h$ under the null hypothesis. This is equivalent to minimizing $\tilde l_n(f)$, because the observed log-likelihood $\hat l_n(f_0)$ does not depend on the bandwidth. But this does not mean that the observed log-likelihood $\hat l_n(f_0)$ plays no role in selecting $h$; i.e., the optimization is subject to $\tilde l_n(f) \ge \hat l_n(f_0)$. This motivates the following data-driven method of choosing the smoothing parameter $h_n^{opt}$ in terms of log-likelihood:

$$\hat h_n^{opt} := \arg\min_{O(n^{-1}\log^2 n) < h < o(n^{-2/3}\log^{-4/3} n)}\Big\{\ \tilde l_n(f) : \tilde l_n(f) \ge \hat l_n(f_0)\ \Big\}.$$

In practice a general guide for the choice of $h$ for a fixed and finite $n$ would be valuable, since the distribution of our test statistic depends on the choice of $h$. In the simulation, we used the triweight kernel to consider the problem of testing the composite hypothesis of normality when both the mean and the variance are unspecified, for sample sizes $n = 20, 30, 40, 50, 60, 80$ and $100$ at the level $\alpha = 0.05$. There is no closed-form solution for the bandwidth. The choice of the bandwidth needs to be done by numerical optimization. The summary statistics of the bandwidth estimates $\hat h_n^{opt}$ obtained by numerical optimization are presented in order of increasing sample size $n = 20, 30, 40, 50, 60, 80, 100$, with 5000 replicates of the null $N(0, 1)$, in Table 1. The mean and the median of the estimates of $h$ decrease from roughly 0.3 to 0.2 as the sample size increases from 20 to 100. We can see that the means of the bandwidth estimates tend to be larger than the medians of the bandwidth estimates. This observation suggests that the bandwidth estimates at a given sample size tend to be right skewed.

Table 1. Summary of $\hat h_n^{opt}$ (rows: $n = 20, 30, 40, 50, 60, 80, 100$; columns: Min., 1st Qu., Median, Mean, 3rd Qu., Max.) [Numeric entries are missing in the source.]

To investigate the finite sample behavior of the proposed test statistic based on the sample size and the associated estimated bandwidth, we generated 5000 replicate samples, in order of increasing size $n = 20, 50$ and $100$, from the standard normal distribution. Both the parametric likelihood estimates under the null and the nonparametric likelihood estimates under the alternative were calculated for each of these samples. The kernel density estimate of the generalized likelihood ratio test statistic $T_{\hat h}$ (solid curve) was plotted with a superimposed normal fit (i.e., the normal density curve with the sample mean and variance), shown by a red dashed line, as reference in Figures 1, 2 and 3 for each of these sample sizes. Figures 1 and 2 illustrate the departure of the GLR test statistic $T_{\hat h}$ from normality. There is skewness in the direction of a slightly heavy right tail. The height of the peak is higher than in the parametric fit. For $n = 100$ the asymptotic normality holds to a remarkable degree, as shown by Figure 3. Figures 1, 2 and 3 demonstrate that the generalized likelihood ratio behaves closer and closer to a normal distribution as the sample size $n$ gets larger and larger.
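The data-driven bandwidth rule above amounts to a one-dimensional constrained search. Here is a minimal sketch for the composite normal null (illustrative names; the fallback when no bandwidth satisfies the constraint is our own choice, not specified in the paper).

```python
import numpy as np
from scipy.stats import norm

def K_triweight(u):
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, 35.0 / 32.0 * (1.0 - u ** 2) ** 3, 0.0)

def nonparam_loglik(x, h, K=K_triweight):
    """l~_n(f) = -sum_i log q_{n,h}(i/(n+1); Q_n), cf. (8)."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    pad = int(np.ceil(n * h)) + 1
    j = np.arange(1 - pad, n + pad + 1)
    Xj = xs[np.clip(j - 1, 0, n - 1)]
    Kh = lambda u: K(u / h) / h
    t = (np.arange(1, n + 1) / (n + 1))[:, None]
    q_hat = ((Kh(t - (j - 1) / n) - Kh(t - j / n)) * Xj).sum(axis=1)
    return -np.sum(np.log(q_hat))

def select_bandwidth(x, h_grid):
    """Data-driven h: minimize l~_n(f) over h subject to l~_n(f) >= l^_n(f_0),
    here for the composite normal null (illustrative implementation)."""
    xs = np.asarray(x, dtype=float)
    loglik_null = np.sum(norm.logpdf(xs, loc=xs.mean(), scale=xs.std(ddof=0)))
    candidates = [(nonparam_loglik(xs, h), h) for h in h_grid]
    feasible = [(ll, h) for ll, h in candidates if ll >= loglik_null]
    if not feasible:               # fallback: unconstrained minimizer (our choice)
        return min(candidates)[1]
    return min(feasible)[1]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.normal(size=50)
    print(select_bandwidth(x, h_grid=np.arange(0.05, 0.50, 0.05)))
```

In practice the grid of $h$ would follow the admissible range implied by (H.2)-(H.3) for the given sample size.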

[Figure 1. Distribution of the GLRT $T_{\hat h}$ (n = 20): kernel density estimate of the GLRT vs. normal fit]

[Figure 2. Distribution of the GLRT $T_{\hat h}$ (n = 50): kernel density estimate of the GLRT vs. normal fit]

[Figure 3. Distribution of the GLRT $T_{\hat h}$ (n = 100): kernel density estimate of the GLRT vs. normal fit]

To assess power, we considered the problem of testing the composite hypothesis of normality when both the mean and the variance are unspecified against ten alternatives that were used in previous studies of nonparametric tests, for sample sizes $n = 20$, $n = 50$ and $n = 100$ at the level $\alpha = 0.05$, using the triweight kernel. Specifically, these alternative distributions are: the standard exponential, denoted Exp(1); the gamma distribution with shape parameter $p = 2$ and scale parameter $\lambda = 1$, denoted Gamma(2, 1); the uniform distribution on $(0, 1)$, denoted U(0, 1); the Beta distribution with parameters 2 and 1, denoted Beta(2, 1); the Beta distribution with parameters 2 and 6, denoted Beta(2, 6); the Laplace distribution with density function $f(x, \theta) = \frac{1}{2\varphi}\exp(-|x - \mu|/\varphi)$, where $\theta := (\mu, \varphi) = (0, 0.5)$, denoted Laplace(0, 0.5); and the log-normal with density function $f(x, \theta) = \frac{1}{x\tau\sqrt{2\pi}}\exp\big(-\frac{1}{2\tau^2}(\log x - \nu)^2\big)$, where $\theta := (\nu, \tau) = (2, 0.5)$, denoted Lognormal(2, 0.5). The last six alternatives in Table 2 were added to present various shapes of densities similar to a normal density.

To determine the critical values of $T_h$, we generated 5000 replicate samples of size 20, 50 and 100, respectively, from the standard normal distribution. For each sample, $T_h$ was calculated using the triweight kernel on the regular grid of $h$ ranging in order from .05 to .45 by 0.05, and at the corresponding estimated $\hat h_n$. Note that $\tilde l_n(f)$ might be misleading for small values of $h$ at the small sample sizes $n = 20$ and $n = 50$. This arises when there is data rounding and $h$ is too small, close to 0. The $(1 - \alpha)$th quantiles of $T_h$ were then estimated. Once these critical values had been determined, the powers of the test were estimated by simulation: for each alternative and each $h$, 5000 samples of size 20, size 50 and size 100 were generated from the corresponding alternative distribution and the powers were thus estimated. These Monte Carlo power estimates are given in Tables 2-4. It is not surprising to see poor performance, with pretty low powers, for some alternatives at bandwidths close to the no-smoothing points.

Table 2. Power Estimates for Various Choices of $h$ and Alternatives ($n = 20$, replicates = 5000, $\alpha = 0.05$). Columns: $h$ = .05, .10, .15, .20, .25, .30, .35, .40, .45 and $\hat h_n$; rows: Exp(1), Gamma(2, 1), U(0, 1), Beta(2, 1), Beta(2, 6), Laplace(0, 0.5), Lognormal(2, 0.5), t(3), t(5), Weibull(2, 0.5). [Numeric power entries are missing in the source.]

Table 3. Power Estimates for Various Choices of $h$ and Alternatives ($n = 50$, replicates = 5000, $\alpha = 0.05$). Columns: $h$ = .05, .10, .15, .20, .25, .30, .35, .40, .45, .50 and $\hat h_n$; rows: the same ten alternatives as in Table 2. [Numeric power entries are missing in the source.]

Table 4. Power Estimates for Various Choices of $h$ and Alternatives ($n = 100$, replicates = 5000, $\alpha = 0.05$). Columns: $h$ = .05, .10, .15, .20, .25, .30, .35, .40, .45, .50 and $\hat h_n$; rows: the same ten alternatives as in Table 2. [Numeric power entries are missing in the source.]

These power simulations show that, for a fixed $n$, there does not exist an $h$ which is optimal uniformly over all the alternatives considered. This makes sense in situations where the tests have to take into account departures from the null hypothesis in all directions in the nonparametric density alternatives: since the alternatives are vague and the choice of the bandwidth is designed to guard against all nonparametric density alternatives, it is natural not to expect the chosen $\hat h_n$ to beat all other choices of $h$ in terms of power. Our simulation results are very encouraging. From Tables 2-4, we can see that the powers for $\hat h_n$ are far greater than, or as close as, the median powers over all choices of $h$ for sample sizes $n = 20, 50$ and $100$. These results suggest that the data-driven method of choosing $h$ is a very promising procedure to overcome the dependence of the power of the test on $h$. These results also suggest one possible way of choosing the optimal $h$ in situations where one has a priori knowledge of a particular alternative being tested against; i.e., if a specific alternative is of special interest, then the best way of choosing $h$ would be to choose the $h$ that yields the highest power in the direction of this alternative for the given sample size and level $\alpha$. Furthermore, these results suggest that another way to improve power against all nonparametric density alternatives, or against a particular alternative one has in mind, is to increase the sample size, by the very nature of the nonparametric setting. This was shown in our simulations in that all the powers increase as the sample size increases. To see whether this was actually the case, we repeated the simulation study with the sample size $n$ increased to 200. The results are presented in Table 5 for $h$ = 0.01 to 0.50, which support the fact that the asymptotics of the theory presented here take some time to take effect.

Table 5. Power Estimates for Various Choices of $h$ and Alternatives ($n = 200$, $h$ = 0.01 to 0.50, replicates = 5000, $\alpha = 0.05$). Columns: $h$ = .01, .02, .03, .04, .05, .10, .15, .20, .25, .30, .35, .40, .45, .50 and $\hat h_n$; rows: the same ten alternatives as in Table 2. [Numeric power entries are missing in the source.]
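The Monte Carlo recipe used above for critical values and power can be written generically over any test statistic. The sketch below is illustrative: a stand-in statistic (the absolute sample skewness) keeps it self-contained and runnable, and in the paper's setting one would plug in $T_{n,\hat h_n}$ computed as in the earlier sketches. Exploiting the distribution-free property (Remark 3), the null samples are drawn from the standard normal.

```python
import numpy as np
from scipy import stats

def mc_critical_value(statistic, n, alpha=0.05, reps=5000, rng=None):
    """Monte Carlo (1 - alpha) critical value under the standard normal null."""
    rng = np.random.default_rng(rng)
    null_stats = np.array([statistic(rng.normal(size=n)) for _ in range(reps)])
    return np.quantile(null_stats, 1.0 - alpha)

def mc_power(statistic, sampler, n, crit, reps=5000, rng=None):
    """Estimated power: fraction of alternative samples exceeding crit."""
    rng = np.random.default_rng(rng)
    return np.mean([statistic(sampler(rng, n)) > crit for _ in range(reps)])

if __name__ == "__main__":
    # Stand-in statistic so the sketch runs on its own; in the paper's study
    # one plugs in T_{n, h_hat} computed as in the earlier sketches.
    statistic = lambda x: abs(stats.skew(x))
    n = 50
    crit = mc_critical_value(statistic, n, reps=1000, rng=0)
    power_exp1 = mc_power(statistic, lambda rng, m: rng.exponential(size=m),
                          n, crit, reps=1000, rng=1)
    print("critical value:", crit, "power vs Exp(1):", power_exp1)
```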

Appendix: Proofs of Main Results

This appendix presents the proofs of the lemmas and the theorem in Section 3. To keep notation simple, we suppress the dependence of $h_n$ on the sample size $n$ and write $h$. The weight $w_{nij}$ is then conveniently abbreviated as $w_{ij}$ and $q_{n,h_n}$ as $q_{n,h}$.

5.1 Proof of Lemma 2

Let $\{U_i\}_{i=1}^{n}$ be independent observations from the uniform distribution $U(0, 1)$ and put $U_{(i)}$ equal to $0$, $U_{(i)}$, $1$ according to $i = 0$, $1 \le i \le n$, $i = n+1$. The following Lemma 3 is quoted for convenience from Rao and Zhao (1997) and its proof is omitted here.

Lemma 3. Assume that $nh/\log n \to \infty$ holds. Then
$$T_n \overset{def}{=} \max\left|\frac{(n+1)(U_{(j)} - U_{(i)})}{j - i} - 1\right| \overset{a.s.}{\longrightarrow} 0,$$
where the max is taken over all $(i, j)$ with $0 \le i < j \le n+1$ and $j - i \ge Cnh$.

Let $D_i := f(X_{(i)})\, q_{n,h}(\frac{i}{n+1}; Q_n)\big/ q_{n,h}(\frac{i}{n+1}; U_n)$. Then
$$V_{n,h} = -\sum_{i\in I_1}\log D_i - \sum_{i\in I_2}\log D_i - \sum_{i\in I_3}\log D_i = V^{(1)} + V^{(2)} + V^{(3)},$$
where we decompose the index set for $i$ into three segments $I_1 := \{k\in N;\ 1\le k\le 2a_n\}$, $I_2 := \{k\in N;\ 2a_n < k < n - 2a_n\}$ and $I_3 := \{k\in N;\ n - 2a_n \le k \le n\}$. We shall demonstrate the proof for $I_2$.

Lemma 4. Under assumptions (H.2) and (F.3), there exists $0 < \alpha \le 1$ such that, for $n$ large, almost surely for all $i \in I_2$ and $p \in (U_{(m_i)}, U_{(m_i')})$,
$$\alpha \le \frac{f(Q(U_{(i)}))}{f(Q(p))} \le \alpha^{-1}.$$

Proof. Under assumptions (H.2) and (F.3), Lemma 1 of Csörgő and Révész (1978) implies that
$$\frac{f(Q(u_1))}{f(Q(u_2))} \le \Delta^{\gamma}(u_1, u_2), \quad \text{for every pair } u_1, u_2 \in (0, 1),$$
with $\gamma$ as in (F.3) and
$$\Delta(u_1, u_2) := \frac{(u_1\vee u_2)\,(1 - u_1\wedge u_2)}{(u_1\wedge u_2)\,(1 - u_1\vee u_2)}.$$
It follows from the symmetry of $\Delta(u_1, u_2)$ that
$$\Delta^{-\gamma}(U_{(i)}, p) \le \frac{f(Q(U_{(i)}))}{f(Q(p))} \le \Delta^{\gamma}(U_{(i)}, p).$$
If $U_{(m_i)} \le p \le U_{(i)}$, then
$$\Delta^{\gamma}(U_{(i)}, p) \le \left[1 + \frac{U_{(i)} - U_{(m_i)}}{1 - U_{(i)}}\right]^{\gamma}\left[1 + \frac{U_{(i)} - U_{(m_i)}}{U_{(m_i)}}\right]^{\gamma}.$$
By Lemma 3, with probability one, for $n$ large, we have for all $i \in I_2$ that
$$0 \le \frac{U_{(i)} - U_{(m_i)}}{1 - U_{(i)}} \le 1 \quad \text{and} \quad 0 \le \frac{U_{(i)} - U_{(m_i)}}{U_{(m_i)}} \le 1.$$
Hence $\Delta^{\gamma}(U_{(i)}, p) \le 4^{\gamma}$. If $U_{(i)} \le p \le U_{(m_i')}$, then
$$\Delta^{\gamma}(U_{(i)}, p) \le \left[1 + \frac{U_{(m_i')} - U_{(i)}}{1 - U_{(m_i')}}\right]^{\gamma}\left[1 + \frac{U_{(m_i')} - U_{(i)}}{U_{(i)}}\right]^{\gamma}.$$
Again using the same arguments as above, we have, with probability one, for $n$ large,
$$0 \le \frac{U_{(m_i')} - U_{(i)}}{1 - U_{(m_i')}} \le 1 \quad \text{and} \quad 0 \le \frac{U_{(m_i')} - U_{(i)}}{U_{(i)}} \le 1,$$
for all $i \in I_2$, which proves that $\Delta^{\gamma}(U_{(i)}, p) \le 4^{\gamma}$. So almost surely for all $i \in I_2$ and $U_{(m_i)} \le p \le U_{(m_i')}$ we have $\Delta^{\gamma}(U_{(i)}, p) \le 4^{\gamma}$. The lemma follows by observing $\Delta^{-\gamma}(U_{(i)}, p) \ge 4^{-\gamma}$, almost surely for all $i \in I_2$.

Lemma 5. If $nh/\log n \to \infty$, then
$$|V^{(2)}_{n,h}| \le C\sum_{i\in I_2}|D_i - 1|.$$

Proof. Using the mean value theorem,
$$D_i = \frac{f(X_{(i)})\sum_{j\in J_i} w_{ij}\, X_{(j)}}{\sum_{j\in J_i} w_{ij}\, U_{(j)}} = \frac{f(Q(U_{(i)}))}{f(Q(U_i^*))},$$
where $U_{(m_i)} < U_i^* < U_{(m_i')}$. It follows from Lemma 4 that there exists a positive constant $\alpha \le 1$ such that, for $n$ large, $\alpha \le D_i \le \alpha^{-1}$ almost surely for all $i \in I_2$. This fact together with Taylor's theorem implies that for all $i \in I_2$, $|\log D_i| \le C|D_i - 1|$, and hence
$$|V^{(2)}| = \Big|\sum_{i\in I_2}\log D_i\Big| \le C\sum_{i\in I_2}|D_i - 1|.$$

Lemma 6. If $n^2h^3\log^4 n \to 0$ as $n \to \infty$, then $\sum_{i\in I_2}|D_i - 1| = o_P(1)$.

Proof. Since
$$D_i = \frac{f(X_{(i)})\sum_{j\in J_i} w_{ij}\, X_{(j)}}{\sum_{j\in J_i} w_{ij}\, U_{(j)}} = \frac{f(Q(U_{(i)}))\sum_{j\in J_i} w_{ij}\, Q(U_{(j)})}{\sum_{j\in J_i} w_{ij}\, U_{(j)}},$$
applying Taylor's theorem with an integral form of the remainder, we have
$$\sum_{j\in J_i} w_{ij}\, Q(U_{(j)}) = \sum_{j\in J_i} w_{ij}\left[Q(U_{(i)}) + q(U_{(i)})(U_{(j)} - U_{(i)}) + \int_{U_{(i)}}^{U_{(j)}}(U_{(j)} - p)\frac{dq(p)}{dp}\,dp\right]$$
$$= q(U_{(i)})\sum_{j\in J_i} w_{ij}\, U_{(j)} + \sum_{j\in J_i} w_{ij}\int_{U_{(i)}}^{U_{(j)}}(U_{(j)} - p)\frac{dq(p)}{dp}\,dp.$$

So
$$\frac{f(Q(U_{(i)}))\sum_{j\in J_i} w_{ij}\, Q(U_{(j)})}{\sum_{j\in J_i} w_{ij}\, U_{(j)}} - 1 = \frac{\sum_{j\in J_i} w_{ij}\int_{U_{(i)}}^{U_{(j)}}(U_{(j)} - p)\, f(Q(U_{(i)}))\,\frac{dq(p)}{dp}\,dp}{\sum_{j\in J_i} w_{ij}\, U_{(j)}}.$$
We have
$$\sum_{i\in I_2}|D_i - 1| = \sum_{i\in I_2}\left|\frac{\sum_{j\in J_i} w_{ij}\int_{U_{(i)}}^{U_{(j)}}(U_{(j)} - p)\, f(Q(U_{(i)}))\,\frac{dq(p)}{dp}\,dp}{\sum_{j\in J_i} w_{ij}\, U_{(j)}}\right|$$
$$\le \sum_{i\in I_2}\frac{\sum_{j\in J_i} w_{ij}(U_{(j)} - U_{(i)})\int_{U_{(i)}}^{U_{(j)}}\frac{|U_{(j)} - p|}{|U_{(j)} - U_{(i)}|}\, f(Q(U_{(i)}))\left|\frac{dq(p)}{dp}\right|dp}{\sum_{j\in J_i} w_{ij}\, U_{(j)}}.$$
Using Lemma 4 and $w_{ij}(U_{(j)} - U_{(i)}) \ge 0$ for all $j \in J_i$, we have
$$\sum_{i\in I_2}|D_i - 1| \le C\sum_{i\in I_2}\left(\int_{U_{(i)}}^{U_{(m_i')}} f(Q(p))\left|\frac{dq(p)}{dp}\right|dp + \int_{U_{(m_i)}}^{U_{(i)}} f(Q(p))\left|\frac{dq(p)}{dp}\right|dp\right)$$
$$= C\sum_{i\in I_2}\left(\int_{U_{(i)}}^{U_{(m_i')}}|J(p)|\,dp + \int_{U_{(m_i)}}^{U_{(i)}}|J(p)|\,dp\right).$$
Let $g^{(n)}_{r,s}(x, y)$ denote the joint density of the order statistics $U_{(r)}$ and $U_{(s)}$ of a random sample of size $n$ from the uniform $U(0, 1)$ distribution, where $1 \le r < s \le n$, and let $b_{u,v}(x)$ denote the beta density function with parameters $u$ and $v$. Then, bounding $|J(p)|$ through (F.3$'$) and using the joint density of $(U_{(m_i)}, U_{(i)})$,
$$E\sum_{i\in I_2}\int_{U_{(m_i)}}^{U_{(i)}}|J(p)|\,dp \le \sum_{i\in I_2}\frac{n}{(m_i-1)(n-i+2)}\iint_{0<x_1\le x_2<1} g^{(n-1)}_{(m_i-1),\,(i-2)}(x_1, x_2)\, G_0(x_1)\,dx_1\,dx_2$$
$$\le \sum_{i\in I_2}\frac{Cn}{(m_i-1)(n-i+2)}\int_0^1 G_0(x_1)\, b_{(m_i-1),\,n-m_i+1}(x_1)\,dx_1,$$

where $G_0(x) := \sup_{1>y\ge x}\frac{1}{y-x}\int_x^y\sqrt{p(1-p)}\,|J(p)|\,dp$, and the last inequality follows from the fact that $g^{(n-1)}_{(m_i-1),\,(i-2)}(x_1, x_2)$ is the joint density of the uniform order statistics $U_{(m_i-1)}$ and $U_{(i-2)}$ of a sample of size $(n-1)$. Noting that
$$\frac{n}{(m_i-1)(n-i+2)} = \frac{n}{(m_i-1)+(n-i+2)}\left(\frac{1}{m_i-1} + \frac{1}{n-i+2}\right),$$
we have
$$E\sum_{i\in I_2}\int_{U_{(m_i)}}^{U_{(i)}}|J(p)|\,dp = O\big((n^2h^3\log^4 n)^{1/2}\big) + O\big((n^2h^3\log^2 n)^{1/2}\big) = O\big((n^2h^3\log^4 n)^{1/2}\big).$$
By the same argument, it can be shown that
$$E\sum_{i\in I_2}\int_{U_{(i)}}^{U_{(m_i')}}|J(p)|\,dp = O\big((n^2h^3\log^4 n)^{1/2}\big).$$
Hence, by the assumption $n^2h^3\log^4 n \to 0$ as $n \to \infty$, we have $\sum_{i\in I_2}|D_i - 1| = o_P(1)$.

Lemma 2 is proved by combining Lemmas 5 and 6 with analogous lengthy arguments, as in the above proofs, establishing that the parts corresponding to $I_1$ and $I_3$, with fewer than $2a_n$ terms each, are asymptotically negligible.

5.2 Proof of Theorem 1

Put $Z_k = \log(1/U_k)$, $k = 1, 2, \ldots$, $S_0 = 0$, $S_j = \sum_{k=1}^{j} Z_k$, $j = 1, 2, \ldots$, and
$$\tilde U_n(y) = \frac{S_k}{S_{n+1}}, \quad \text{if } \frac{k-1}{n} < y \le \frac{k}{n},\ k = 1, 2, \ldots, n.$$
Then the $Z_k$ are independent exponential random variables with mean value one and, for each $n$,
$$\{U_n(y);\ 0 \le y \le 1\} \overset{D}{=} \{\tilde U_n(y);\ 0 \le y \le 1\}.$$
A Taylor expansion shows that
$$U_{n,h} \overset{D}{=} -\sum_{i=1}^{n}\log q_{n,h}\Big(\frac{i}{n+1}; \tilde U_n\Big) = \sum_{i=1}^{n}\Big(-\eta_i + \frac{1}{2}\eta_i^2\Big) + R,$$
where $\eta_i = q_{n,h}(\frac{i}{n+1}; \tilde U_n) - 1$ and $R = -\frac{1}{3}\sum_{i=1}^{n}\frac{\eta_i^3}{(1+\theta_i\eta_i)^3}$, $0 < \theta_i < 1$. We define the centered standard exponential variables $\xi_k = Z_k - 1$, $1 \le k \le N$ with $N = n+1$. Let
$$T_{i,n} := \frac{1}{N}\sum_{j\in J_i} w_{ij}\,\bar\xi_j, \quad \text{where } \bar\xi_j := \sum_{k=1}^{j}\xi_k, \qquad Y_n := \frac{1}{N}\sum_{i=1}^{N}\xi_i,$$
$$C_{ik} := h\sum_{j\in J_i,\ j\ge k} w_{ij}\ \ \Big(\text{so that } T_{i,n} = \frac{1}{Nh}\sum_{k} C_{ik}\,\xi_k\Big), \qquad D_{i,n} := \frac{1}{N}\sum_{j\in J_i} j\, w_{ij}.$$
Then
$$U_{n,h} = -\Big(\sum_{i\in I_1} + \sum_{i\in I_2} + \sum_{i\in I_3}\Big)\log q_{n,h}\Big(\frac{i}{n+1}; U_n\Big) = U^{(1)} + U^{(2)} + U^{(3)}.$$

Lemma 7. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). Under the null hypothesis $H_0$, we have
$$\sum_{i\in I_2}\Big(-\eta_i + \frac{1}{2}\eta_i^2\Big) + R = \frac{1}{2}\sum_{i\in I_2} T_{i,n}^2 + o_P(1).$$

Proof. Writing $\eta_i = \frac{D_{i,n} - 1 + T_{i,n} - Y_n}{1 + Y_n}$ and expanding, we can write
$$\sum_{i\in I_2}\Big(-\eta_i + \frac{1}{2}\eta_i^2\Big) + R = \frac{1}{2}\sum_{i\in I_2} T_{i,n}^2 + \sum_{m=1}^{11} B_{m,n} + R,$$
where the $B_{m,n}$ are sums of cross-product terms in $(D_{i,n}-1)$, $T_{i,n}$ and $Y_n$; for example $B_{2,n} = \sum_{i\in I_2}(D_{i,n}-1)T_{i,n}$, $B_{4,n} = \sum_{i\in I_2}(D_{i,n}-1)Y_n$, $B_{10,n} = \sum_{i\in I_2} T_{i,n}Y_n$, and $B_{11,n}$ collects the $O_P$ remainder terms of the expansion. Using the facts that $D_{i,n} = 1 + O\big(\frac{1}{nh}\big) + O\big(\frac{1}{n^{1+\epsilon}h^{2+\epsilon}}\big)$ for any $\epsilon > 0$ and $Y_n = O_P\big(\frac{1}{\sqrt{n}}\big)$, elementary calculations show that $B_{m,n} = o_P(1)$, $m = 1, 2, \ldots, 11$, and $R = o_P(1)$.

Lemma 8. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). Then
$$E\,\frac{1}{2}\sum_{i\in I_2} T_{i,n}^2 = \frac{1}{2h}\int_{-1}^{1} K^2(x)\,dx + o\Big(\frac{1}{\sqrt{n^2h^3}}\Big).$$

Proof. Since the $\xi_k$ are independent with mean 0 and variance 1,
$$E\,\frac{1}{2}\sum_{i\in I_2} T_{i,n}^2 = \frac{1}{2N^2h^2}\sum_{i\in I_2} E\Big(\sum_{k} C_{ik}\,\xi_k\Big)^2 = \frac{1}{2N^2h^2}\sum_{i\in I_2}\sum_{k} C_{ik}^2 = \frac{1}{2h}\int_{-1}^{1} K^2(x)\,dx + o\Big(\frac{1}{\sqrt{n^2h^3}}\Big),$$
the last step following by approximating the Riemann sums of the $C_{ik}^2$ by the integral of $K^2$.

Lemma 9. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). Under the null hypothesis $H_0$, we have
$$\sqrt{2h}\,\big(T_{n,h} - \mu_{n,h}(K)\big) = \sqrt{2h}\,\Big(\frac{1}{2}\sum_{i\in I_2} T_{i,n}^2 - \mu_{n,h}(K)\Big) + o_P(1),$$
where $\mu_{n,h}(K) = \frac{1}{2h}\int K^2(x)\,dx + o\big(\frac{1}{\sqrt{n^2h^3}}\big)$.

Proof.
$$\sqrt{2h}\,\big(T_{n,h} - \mu_{n,h}(K)\big) = \sqrt{2h}\,\big(U_{n,h} - \mu_{n,h}(K)\big) + \sqrt{2h}\,V_{n,h} + \sqrt{2h}\,W_n(\hat\theta)$$
$$\overset{d}{=} \sqrt{2h}\,\Big(\sum_{i\in I_2}\Big(-\eta_i + \frac{1}{2}\eta_i^2\Big) + R - \mu_{n,h}(K)\Big) + o_P(1) = \sqrt{2h}\,\Big(\frac{1}{2}\sum_{i\in I_2} T_{i,n}^2 - \mu_{n,h}(K)\Big) + o_P(1),$$
by Lemmas 1, 2 and 7 and the asymptotic negligibility of $U^{(1)}$ and $U^{(3)}$. Hence the limiting behavior of $\sqrt{2h}(T_{n,h} - \mu_{n,h}(K))$ is determined by $\sqrt{2h}\big(\frac{1}{2}\sum_{i\in I_2} T_{i,n}^2 - \mu_{n,h}(K)\big)$. Since the $T_{i,n}^2$ are not independent, it naturally comes to mind that the martingale approach might be used to obtain the asymptotic results for $T_{n,h}$.

Lemma 10. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). We have
$$\sqrt{2h}\,\Big(\frac{1}{2}\sum_{i\in I_2} T_{i,n}^2 - \mu_{n,h}(K)\Big) = \sum_{2(a_n+1) < l \le n - 2(a_n+1)} W_l + o_P(1),$$
where $\{W_l\}$ is a martingale difference sequence with respect to $\mathcal F_{l-1} = \sigma(\xi_1, \xi_2, \ldots, \xi_{l-1})$.

Proof.
$$\sqrt{2h}\,\Big(\frac{1}{2}\sum_{i\in I_2} T_{i,n}^2 - \mu_{n,h}(K)\Big) = \sqrt{2h}\,\Big(\frac{1}{2N^2h^2}\sum_{i\in I_2}\sum_{k} C_{ik}^2\,\xi_k^2 - \mu_{n,h}(K)\Big) + \sqrt{2h}\,\frac{1}{N^2h^2}\sum_{i\in I_2}\sum_{0<k<l} C_{ik}C_{il}\,\xi_k\xi_l := A_I + A_{II}.$$

Rearranging the order of summation in $A_{II}$ and splitting off the boundary ranges of $l$, we obtain
$$A_{II} = \sum_{2(a_n+1) < l \le n - 2(a_n+1)} W_l + r,$$
where
$$d_{kl} := \frac{\sqrt{2h}}{N^2h^2}\sum_{i} C_{ik}C_{il}, \qquad W_l := \xi_l\sum_{(l-2a_n)\vee 1 \le k \le l-1} d_{kl}\,\xi_k,$$
the inner sum over $i$ running over those $i \in I_2$ for which both $C_{ik}$ and $C_{il}$ are nonzero, and $r$ collecting the terms with $l$ in the two boundary ranges. Then $\{W_l\}$ is a martingale difference sequence with respect to $\mathcal F_{l-1} = \sigma(\xi_1, \xi_2, \ldots, \xi_{l-1})$. Thus $\sum_{2(a_n+1) < l \le n - 2(a_n+1)} W_l$ is a martingale with respect to $\mathcal F_n = \sigma(\xi_1, \xi_2, \ldots, \xi_n)$. It can be shown that $A_I = o_P(1)$ and $r = o_P(1)$. Hence we have
$$\sqrt{2h}\,\Big(\frac{1}{2}\sum_{i\in I_2} T_{i,n}^2 - \mu_{n,h}(K)\Big) = \sum_{2(a_n+1) < l \le n - 2(a_n+1)} W_l + o_P(1).$$
So the lemma follows.

The conditional variance is an intrinsic measure of time for a martingale. For many purposes the time taken by a martingale to cross a level is best represented through its conditional variance rather than through the number of increments up to the crossing (see Hall and Heyde, 1980, p. 54). The conditional variance is given as follows:
$$V_n := \sum_{2(a_n+1) < l \le n - 2(a_n+1)} E\big[W_l^2 \mid \xi_1, \xi_2, \ldots, \xi_{l-1}\big] = \sum_{l} E\Big[\Big(\sum_{k} d_{kl}\,\xi_k\Big)^2\xi_l^2\ \Big|\ \xi_1, \ldots, \xi_{l-1}\Big]$$
$$= \sum_{l}\Big(\sum_{k} d_{kl}\,\xi_k\Big)^2 E\big[\xi_l^2 \mid \xi_1, \ldots, \xi_{l-1}\big] = \sum_{l}\Big(\sum_{k} d_{kl}\,\xi_k\Big)^2.$$

Lemma 11. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). Then as $n \to \infty$,
$$E\,V_n = 2\int_0^2\frac{1}{1+z}\Big(\int K(x)K(x-z)\,dx\Big)^2 dz + O(h).$$

Proof. Since the $\xi_k$ are independent with unit variance,
$$E\,V_n = \sum_{l}\sum_{k} d_{kl}^2 = \frac{2h}{N^4h^4}\sum_{l}\sum_{k}\Big(\sum_{i} C_{ik}C_{il}\Big)^2.$$
Substituting the form of the $C_{ik}$ and approximating the resulting Riemann sums by integrals, with $z$ corresponding to the rescaled lag $(l-k)/(nh)$,
$$E\,V_n = 2\int_0^2\frac{1}{1+z}\iint K(x)K(y)K(x-z)K(y-z)\,dx\,dy\,dz + O(h) = 2\int_0^2\frac{1}{1+z}\Big(\int K(x)K(x-z)\,dx\Big)^2 dz + O(h).$$
So $E\,V_n = 2\int_0^2\frac{1}{1+z}\big(\int K(x)K(x-z)\,dx\big)^2 dz + O(h)$.

Lemma 12. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). Then as $n \to \infty$, $\mathrm{Var}(V_n) = O(h)$.

Proof. Expanding $\big(\sum_k d_{kl}\,\xi_k\big)^2$ and using the independence and the finite fourth moments of the $\xi_k$, only the covariance terms whose kernel windows overlap contribute, and a direct count of these terms gives
$$\mathrm{Var}(V_n) = \mathrm{Var}\Big(\sum_{l}\Big(\sum_{k} d_{kl}\,\xi_k\Big)^2\Big) = \frac{16}{N^8h^6}\,O(n^8h^7) = O(h).$$

The following lemma follows immediately from Lemma 11 and Lemma 12.

Lemma 13. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). Then $V_n \overset{P}{\longrightarrow} \sigma^2(K)$.

Lemma 14. Assume the kernel function $K(x)$ satisfies conditions (K.1)-(K.3); the smoothing parameter $h$ satisfies conditions (H.1)-(H.3); and $f(x)$ satisfies conditions (F.1)-(F.3). Then for any $\varepsilon > 0$, we have
$$\sum_{2(a_n+1) < l \le n - 2(a_n+1)} E\big[W_l^2\, I(|W_l| > \varepsilon) \mid \xi_1, \xi_2, \ldots, \xi_{l-1}\big] \overset{P}{\longrightarrow} 0.$$

Proof. Since
$$\sum_{l} E\,W_l^4 = \sum_{l}\Big[\sum_{k} d_{kl}^4\, E\xi_l^4\, E\xi_k^4 + 6\sum_{r<s} d_{rl}^2 d_{sl}^2\, E\xi_l^4\Big] = \frac{16}{N^8h^6}\,O(n^7h^6) = O\Big(\frac{1}{N}\Big),$$
by the Markov inequality, for any $\varepsilon > 0$,
$$\sum_{2(a_n+1) < l \le n - 2(a_n+1)} E\big[W_l^2\, I(|W_l| > \varepsilon) \mid \xi_1, \xi_2, \ldots, \xi_{l-1}\big] \overset{P}{\longrightarrow} 0.$$

Theorem 1 now follows immediately from Lemma 13 and Lemma 14 (see Corollary 3.1 of Hall and Heyde, 1980, p. 58).

References

Bahadur, R. R. (1958). Examples of inconsistency of maximum likelihood estimates. Sankhyā, 20.

Bloch, D., & Gastwirth, J. (1968). On a simple estimate of the reciprocal of the density function. Ann. Math. Statist., 39.

Bofinger, E. (1975). Estimation of a density function using order statistics. Australian Journal of Statistics, 17(1), 1-7.

Csörgő, M., & Révész, P. (1978). Strong approximations of the quantile process. Ann. Statist., 6.

D'Agostino, R., & Stephens, M. (1986). Goodness-of-Fit Techniques. Marcel Dekker, New York.

Falk, M. (1984). Relative deficiency of kernel type estimators of quantiles. Ann. Statist., 12.

Falk, M. (1986). On the estimation of the quantile density function. Statist. Probab. Letters, 4.

Fan, J., Zhang, C., & Zhang, J. (2001). Generalized likelihood ratio statistics and Wilks phenomenon. The Annals of Statistics, 29(1).

Hall, P., & Heyde, C. C. (1980). Martingale Limit Theory and Its Application. Academic Press.

Ingster, Y. I., & Suslina, I. A. (2002). Nonparametric Goodness-of-Fit Testing Under Gaussian Models. Springer.

Le Cam, L. (1990). Maximum likelihood: an introduction. ISI Review, 58.

Parzen, E. (1979). Nonparametric statistical modelling (with comments). J. Amer. Statist. Assoc., 74.

Rao, C. R., & Zhao, L. C. (1997). A limiting distribution theorem. In Festschrift for Lucien Le Cam. Springer, Berlin.

Reiss, R. D. (1978). Approximate distribution of the maximum deviation of histograms. Metrika, 25(1), 9-26.

Sheather, S. J., & Maritz, J. S. (1983). An estimate of the asymptotic standard error of the sample median. Australian Journal of Statistics, 25(1).

Siddiqui, M. M. (1960). Distribution of quantiles in samples from a bivariate population. Journal of Research of the National Bureau of Standards, Section B, 64.

Tukey, J. W. (1965). Which part of the sample contains the most information? Proc. Nat. Acad. Sci. U.S.A., 53.

Copyrights

Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).


More information

Lecture 12: Classification

Lecture 12: Classification Lecture : Classfcaton g Dscrmnant functons g The optmal Bayes classfer g Quadratc classfers g Eucldean and Mahalanobs metrcs g K Nearest Neghbor Classfers Intellgent Sensor Systems Rcardo Guterrez-Osuna

More information

Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models

Maximum Likelihood Estimation of Binary Dependent Variables Models: Probit and Logit. 1. General Formulation of Binary Dependent Variables Models ECO 452 -- OE 4: Probt and Logt Models ECO 452 -- OE 4 Maxmum Lkelhood Estmaton of Bnary Dependent Varables Models: Probt and Logt hs note demonstrates how to formulate bnary dependent varables models

More information

Markov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement

Markov Chain Monte Carlo (MCMC), Gibbs Sampling, Metropolis Algorithms, and Simulated Annealing Bioinformatics Course Supplement Markov Chan Monte Carlo MCMC, Gbbs Samplng, Metropols Algorthms, and Smulated Annealng 2001 Bonformatcs Course Supplement SNU Bontellgence Lab http://bsnuackr/ Outlne! Markov Chan Monte Carlo MCMC! Metropols-Hastngs

More information

Parametric fractional imputation for missing data analysis. Jae Kwang Kim Survey Working Group Seminar March 29, 2010

Parametric fractional imputation for missing data analysis. Jae Kwang Kim Survey Working Group Seminar March 29, 2010 Parametrc fractonal mputaton for mssng data analyss Jae Kwang Km Survey Workng Group Semnar March 29, 2010 1 Outlne Introducton Proposed method Fractonal mputaton Approxmaton Varance estmaton Multple mputaton

More information

Why Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one)

Why Bayesian? 3. Bayes and Normal Models. State of nature: class. Decision rule. Rev. Thomas Bayes ( ) Bayes Theorem (yes, the famous one) Why Bayesan? 3. Bayes and Normal Models Alex M. Martnez alex@ece.osu.edu Handouts Handoutsfor forece ECE874 874Sp Sp007 If all our research (n PR was to dsappear and you could only save one theory, whch

More information

Small Area Interval Estimation

Small Area Interval Estimation .. Small Area Interval Estmaton Partha Lahr Jont Program n Survey Methodology Unversty of Maryland, College Park (Based on jont work wth Masayo Yoshmor, Former JPSM Vstng PhD Student and Research Fellow

More information

Direct Methods for Solving Macromolecular Structures Ed. by S. Fortier Kluwer Academic Publishes, The Netherlands, 1998, pp

Direct Methods for Solving Macromolecular Structures Ed. by S. Fortier Kluwer Academic Publishes, The Netherlands, 1998, pp Drect Metods for Solvng Macromolecular Structures Ed. by S. Forter Kluwer Academc Publses, Te Neterlands, 998, pp. 79-85. SAYRE EQUATION, TANGENT FORMULA AND SAYTAN FAN HAI-FU Insttute of Pyscs, Cnese

More information

Iranian Journal of Mathematical Chemistry, Vol. 5, No.2, November 2014, pp

Iranian Journal of Mathematical Chemistry, Vol. 5, No.2, November 2014, pp Iranan Journal of Matematcal Cemstry, Vol. 5, No.2, November 204, pp. 85 90 IJMC Altan dervatves of a grap I. GUTMAN (COMMUNICATED BY ALI REZA ASHRAFI) Faculty of Scence, Unversty of Kragujevac, P. O.

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

A Robust Method for Calculating the Correlation Coefficient

A Robust Method for Calculating the Correlation Coefficient A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal

More information

Hidden Markov Models & The Multivariate Gaussian (10/26/04)

Hidden Markov Models & The Multivariate Gaussian (10/26/04) CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models

More information

CS 2750 Machine Learning. Lecture 5. Density estimation. CS 2750 Machine Learning. Announcements

CS 2750 Machine Learning. Lecture 5. Density estimation. CS 2750 Machine Learning. Announcements CS 750 Machne Learnng Lecture 5 Densty estmaton Mlos Hauskrecht mlos@cs.ptt.edu 539 Sennott Square CS 750 Machne Learnng Announcements Homework Due on Wednesday before the class Reports: hand n before

More information

Comments on Detecting Outliers in Gamma Distribution by M. Jabbari Nooghabi et al. (2010)

Comments on Detecting Outliers in Gamma Distribution by M. Jabbari Nooghabi et al. (2010) Comments on Detectng Outlers n Gamma Dstrbuton by M. Jabbar Nooghab et al. (21) M. Magdalena Lucn Alejandro C. Frery September 17, 215 arxv:159.55v1 [stat.co] 16 Sep 215 Ths note shows that the results

More information

/ n ) are compared. The logic is: if the two

/ n ) are compared. The logic is: if the two STAT C141, Sprng 2005 Lecture 13 Two sample tests One sample tests: examples of goodness of ft tests, where we are testng whether our data supports predctons. Two sample tests: called as tests of ndependence

More information

Asymptotics of the Solution of a Boundary Value. Problem for One-Characteristic Differential. Equation Degenerating into a Parabolic Equation

Asymptotics of the Solution of a Boundary Value. Problem for One-Characteristic Differential. Equation Degenerating into a Parabolic Equation Nonl. Analyss and Dfferental Equatons, ol., 4, no., 5 - HIKARI Ltd, www.m-har.com http://dx.do.org/.988/nade.4.456 Asymptotcs of the Soluton of a Boundary alue Problem for One-Characterstc Dfferental Equaton

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

Statistical Hypothesis Testing for Returns to Scale Using Data Envelopment Analysis

Statistical Hypothesis Testing for Returns to Scale Using Data Envelopment Analysis Statstcal Hypothess Testng for Returns to Scale Usng Data nvelopment nalyss M. ukushge a and I. Myara b a Graduate School of conomcs, Osaka Unversty, Osaka 560-0043, apan (mfuku@econ.osaka-u.ac.p) b Graduate

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

COMP4630: λ-calculus

COMP4630: λ-calculus COMP4630: λ-calculus 4. Standardsaton Mcael Norrs Mcael.Norrs@ncta.com.au Canberra Researc Lab., NICTA Semester 2, 2015 Last Tme Confluence Te property tat dvergent evaluatons can rejon one anoter Proof

More information

Not-for-Publication Appendix to Optimal Asymptotic Least Aquares Estimation in a Singular Set-up

Not-for-Publication Appendix to Optimal Asymptotic Least Aquares Estimation in a Singular Set-up Not-for-Publcaton Aendx to Otmal Asymtotc Least Aquares Estmaton n a Sngular Set-u Antono Dez de los Ros Bank of Canada dezbankofcanada.ca December 214 A Proof of Proostons A.1 Proof of Prooston 1 Ts roof

More information

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4) I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes

More information

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /

P R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering / Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons

More information

x = , so that calculated

x = , so that calculated Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to

More information

Logistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI

Logistic Regression. CAP 5610: Machine Learning Instructor: Guo-Jun QI Logstc Regresson CAP 561: achne Learnng Instructor: Guo-Jun QI Bayes Classfer: A Generatve model odel the posteror dstrbuton P(Y X) Estmate class-condtonal dstrbuton P(X Y) for each Y Estmate pror dstrbuton

More information

Statistical Inference. 2.3 Summary Statistics Measures of Center and Spread. parameters ( population characteristics )

Statistical Inference. 2.3 Summary Statistics Measures of Center and Spread. parameters ( population characteristics ) Ismor Fscher, 8//008 Stat 54 / -8.3 Summary Statstcs Measures of Center and Spread Dstrbuton of dscrete contnuous POPULATION Random Varable, numercal True center =??? True spread =???? parameters ( populaton

More information

Computation of Higher Order Moments from Two Multinomial Overdispersion Likelihood Models

Computation of Higher Order Moments from Two Multinomial Overdispersion Likelihood Models Computaton of Hgher Order Moments from Two Multnomal Overdsperson Lkelhood Models BY J. T. NEWCOMER, N. K. NEERCHAL Department of Mathematcs and Statstcs, Unversty of Maryland, Baltmore County, Baltmore,

More information

Lecture 4: September 12

Lecture 4: September 12 36-755: Advanced Statstcal Theory Fall 016 Lecture 4: September 1 Lecturer: Alessandro Rnaldo Scrbe: Xao Hu Ta Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer: These notes have not been

More information

, rst we solve te PDE's L ad L ad n g g (x) = ; = ; ; ; n () (x) = () Ten, we nd te uncton (x), te lnearzng eedbac and coordnates transormaton are gve

, rst we solve te PDE's L ad L ad n g g (x) = ; = ; ; ; n () (x) = () Ten, we nd te uncton (x), te lnearzng eedbac and coordnates transormaton are gve Freedom n Coordnates Transormaton or Exact Lnearzaton and ts Applcaton to Transent Beavor Improvement Kenj Fujmoto and Tosaru Suge Dvson o Appled Systems Scence, Kyoto Unversty, Uj, Kyoto, Japan suge@robotuassyoto-uacjp

More information

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland

More information

The finite element method explicit scheme for a solution of one problem of surface and ground water combined movement

The finite element method explicit scheme for a solution of one problem of surface and ground water combined movement IOP Conference Seres: Materals Scence and Engneerng PAPER OPEN ACCESS e fnte element metod explct sceme for a soluton of one problem of surface and ground water combned movement o cte ts artcle: L L Glazyrna

More information

ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EQUATION

ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EQUATION Advanced Mathematcal Models & Applcatons Vol.3, No.3, 2018, pp.215-222 ON A DETERMINATION OF THE INITIAL FUNCTIONS FROM THE OBSERVED VALUES OF THE BOUNDARY FUNCTIONS FOR THE SECOND-ORDER HYPERBOLIC EUATION

More information

Inductance Calculation for Conductors of Arbitrary Shape

Inductance Calculation for Conductors of Arbitrary Shape CRYO/02/028 Aprl 5, 2002 Inductance Calculaton for Conductors of Arbtrary Shape L. Bottura Dstrbuton: Internal Summary In ths note we descrbe a method for the numercal calculaton of nductances among conductors

More information

x i1 =1 for all i (the constant ).

x i1 =1 for all i (the constant ). Chapter 5 The Multple Regresson Model Consder an economc model where the dependent varable s a functon of K explanatory varables. The economc model has the form: y = f ( x,x,..., ) xk Approxmate ths by

More information

A Note on Bound for Jensen-Shannon Divergence by Jeffreys

A Note on Bound for Jensen-Shannon Divergence by Jeffreys OPEN ACCESS Conference Proceedngs Paper Entropy www.scforum.net/conference/ecea- A Note on Bound for Jensen-Shannon Dvergence by Jeffreys Takuya Yamano, * Department of Mathematcs and Physcs, Faculty of

More information

Statistics Chapter 4

Statistics Chapter 4 Statstcs Chapter 4 "There are three knds of les: les, damned les, and statstcs." Benjamn Dsrael, 1895 (Brtsh statesman) Gaussan Dstrbuton, 4-1 If a measurement s repeated many tmes a statstcal treatment

More information

Gaussian Mixture Models

Gaussian Mixture Models Lab Gaussan Mxture Models Lab Objectve: Understand the formulaton of Gaussan Mxture Models (GMMs) and how to estmate GMM parameters. You ve already seen GMMs as the observaton dstrbuton n certan contnuous

More information

Maximum Likelihood Estimation (MLE)

Maximum Likelihood Estimation (MLE) Maxmum Lkelhood Estmaton (MLE) Ken Kreutz-Delgado (Nuno Vasconcelos) ECE 175A Wnter 01 UCSD Statstcal Learnng Goal: Gven a relatonshp between a feature vector x and a vector y, and d data samples (x,y

More information