Nonparametric Conditional Density Estimation

Bruce E. Hansen
University of Wisconsin
www.ssc.wisc.edu/~bhansen

November 2004
Preliminary and Incomplete

Research supported by the National Science Foundation. Department of Economics, 1180 Observatory Drive, University of Wisconsin, Madison, WI.

1 Introduction

Conditional density functions are a useful way to display uncertainty. This paper investigates nonparametric kernel methods for their estimation. The standard estimator is the ratio of the joint density estimate to the marginal density estimate. Our proposal is to instead use a two-step estimator, where the first step consists of estimation of the conditional mean, and the second step consists of estimating the conditional density of the regression error. If most of the dependence is captured by the conditional mean, the second step will require less smoothing, thereby reducing estimation variance.

Conditional density estimation was introduced by Rosenblatt (1969). A bias correction was proposed by Hyndman, Bashtannyk and Grunwald (1996). Fan, Yao and Tong (1996) proposed a direct estimator based on local polynomial estimation; see also Section 6.5 of Fan and Yao (2003). Bandwidth selection rules have been proposed by Bashtannyk and Hyndman (2001), Fan and Yim (2004), and Hall, Racine and Li (2004). The related problem of conditional distribution estimation is examined in Hall, Wolff and Yao (1999). Other papers have used conditional density estimates as an input to other problems, including Robinson (1991), Tjostheim (1994), Polonik and Yao (2000) and Hyndman and Yao (2002).

Our two-step conditional density estimator is partially motivated by the two-step conditional variance estimator of Fan and Yao (1998). They showed that two-step estimation is asymptotically efficient since the first-step conditional mean estimate does not affect the asymptotic distribution of the second-step variance estimator. We show here that this property also applies to conditional density estimation.

Our analysis is confined to the case of a real-valued conditioning variable. The generalization to the case of vector-valued conditioning variables should be straightforward, so long as the conditioning sets for the conditional mean and the conditional density are identical. However, if the conditional density of the regression error has a reduced conditioning set relative to the conditional mean, the analysis changes. (For example, if the conditional mean has two variables and the conditional error density only one.) In this case the second-step estimator may not be asymptotically independent of the first step. More importantly, it appears that the two-step estimator may achieve an improved convergence rate relative to the conventional direct estimator. This analysis is more involved and remains to be completed.

Our two-step estimator could also be generalized to three steps, where an intermediate step estimates the conditional variance. We expect the qualitative analysis to be similar, and conjecture that there will be further improvements in estimation efficiency. This work remains to be completed.

Furthermore, our discussion is based on local average estimates. Alternatively, the mean, variance, or density can be estimated using local linear estimators. This should be explored, as local linear estimators have better bias properties than local averages (and thus have improved efficiency) when there is non-trivial dependence. Other than changes in the bias expressions, however, we expect that no important changes will arise in the theory. Again, this work remains to be completed.

The organization of the remainder of the paper is as follows. Section 2 introduces the framework, Section 3 the one-step estimator, Section 4 the new two-step estimator, and Section 5 compares their asymptotic biases. Section 6 discusses cross-validation for bandwidth selection. Section 7 presents simulation evidence, and Section 8 an application to US GDP. Proofs are presented in the Appendix.

2 Framework

The observables $\{Y_i, X_i\} \in \mathbb{R} \times \mathbb{R}$ are strictly stationary and strong mixing. Let $f(y,x)$ and $f(y \mid x)$ denote the joint and conditional density functions, and let $f(x)$ denote the marginal density of $X_i$. The goal is estimation of $f(y \mid x)$.

Our estimators will be based on kernel regression. Let $K(x): \mathbb{R} \to \mathbb{R}$ denote a bounded symmetric kernel function and set $\sigma_K^2 = \int_{\mathbb{R}} u^2 K(u)\, du$ and $R(K) = \int_{\mathbb{R}} K(u)^2\, du$. For a bandwidth $h$ let $K_h(u) = h^{-1} K(u/h)$. Define the derivatives
$$ f^{(r)}(x) = \frac{\partial^r}{\partial x^r} f(x), \qquad f^{(r,s)}(y \mid x) = \frac{\partial^{r+s}}{\partial y^r\, \partial x^s} f(y \mid x). $$

3 One-Step Estimator

Let $h_1$ and $h_2$ be bandwidths. Standard kernel estimators of $f(y,x)$, $f(x)$ and $f(y \mid x)$ are
$$ \tilde f(y,x) = \frac{1}{n} \sum_{i=1}^n K_{h_2}(x - X_i)\, K_{h_1}(y - Y_i), \qquad \tilde f(x) = \frac{1}{n} \sum_{i=1}^n K_{h_2}(x - X_i), $$
and
$$ \tilde f(y \mid x) = \frac{\tilde f(y,x)}{\tilde f(x)} = \frac{\sum_{i=1}^n K_{h_2}(x - X_i)\, K_{h_1}(y - Y_i)}{\sum_{i=1}^n K_{h_2}(x - X_i)}. $$
Asymptotic approximations show that it is optimal for estimation of $f(y \mid x)$ to set $h_1 = c_1 n^{-1/6}$ and $h_2 = c_2 n^{-1/6}$ for $c_1 > 0$ and $c_2 > 0$. Under standard regularity conditions the conditional density estimator has the asymptotic distribution
$$ n^{1/3} \left( \tilde f(y \mid x) - f(y \mid x) \right) \to_d N\!\left( \theta_1, \sigma_1^2 \right) $$
where
$$ \theta_1 = \frac{\sigma_K^2}{2} \left( c_1^2\, f^{(2,0)}(y \mid x) + c_2^2\, f^{(0,2)}(y \mid x) + 2 c_2^2\, f^{(0,1)}(y \mid x)\, \frac{f^{(1)}(x)}{f(x)} \right) $$
and
$$ \sigma_1^2 = \frac{R(K)^2\, f(y \mid x)}{c_1 c_2\, f(x)}. $$
Observe that the rate of convergence is $O(n^{-1/3})$, the same as for bivariate density estimation. It is slower than the $O(n^{-2/5})$ rate obtained for univariate density estimation and bivariate regression.
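To make the ratio estimator concrete, here is a minimal numerical sketch with a Gaussian kernel. The code is illustrative only (the function names and the toy data are not from the paper); it directly implements $\tilde f(y \mid x)$ above.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard normal density used as the kernel K."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def one_step_cond_density(y, x, Y, X, h1, h2):
    """Ratio estimator: sum_i K_{h2}(x-X_i) K_{h1}(y-Y_i) / sum_i K_{h2}(x-X_i)."""
    Kx = gaussian_kernel((x - X) / h2) / h2   # K_{h2}(x - X_i)
    Ky = gaussian_kernel((y - Y) / h1) / h1   # K_{h1}(y - Y_i)
    return np.sum(Kx * Ky) / np.sum(Kx)

# Toy illustration with simulated data and ad hoc bandwidths of order n^{-1/6}.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=n)
Y = X + rng.normal(size=n)
h = n ** (-1.0 / 6.0)
print(one_step_cond_density(0.5, 0.0, Y, X, h1=h, h2=h))
```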

4 Two-Step Estimator

Define the conditional mean $m(x) = E(Y_i \mid X_i = x)$, so that
$$ Y_i = m(X_i) + e_i $$
and $e_i$ is a regression error. Letting $g(e \mid x)$ denote the conditional density of $e_i$ given $X_i = x$, we have the equivalence
$$ f(y \mid x) = g(y - m(x) \mid x). $$
From this equation we can see that an alternative method for estimation of $f$ is through estimation of $g$ and $m$.

Let $b_0$, $b_1$ and $b_2$ be bandwidths. The Nadaraya-Watson estimator of $m(x)$ is
$$ \hat m(x) = \frac{\sum_{i=1}^n K_{b_0}(x - X_i)\, Y_i}{\sum_{i=1}^n K_{b_0}(x - X_i)} $$
with residuals $\hat e_i = Y_i - \hat m(X_i)$. A second-stage estimator of $g$ is
$$ \hat g(e \mid x) = \frac{\sum_{i=1}^n K_{b_2}(x - X_i)\, K_{b_1}(e - \hat e_i)}{\sum_{i=1}^n K_{b_2}(x - X_i)}. $$
Together we obtain the two-step estimator
$$ \hat f(y \mid x) = \hat g(y - \hat m(x) \mid x) = \frac{\sum_{i=1}^n K_{b_2}(x - X_i)\, K_{b_1}(y - \hat m(x) - \hat e_i)}{\sum_{i=1}^n K_{b_2}(x - X_i)}. $$
Assume that $b_0 = a_0 n^{-1/5}$, $b_1 = a_1 n^{-1/6}$ and $b_2 = a_2 n^{-1/6}$.
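Before turning to the asymptotic result, here is a minimal sketch of the two estimation steps with a Gaussian kernel; it follows the definitions above, but the function names are illustrative and not from the paper.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def nw_mean(x, Y, X, b0):
    """Step 1: Nadaraya-Watson estimator of m(x) with bandwidth b0."""
    w = gaussian_kernel((x - X) / b0)
    return np.sum(w * Y) / np.sum(w)

def two_step_cond_density(y, x, Y, X, b0, b1, b2):
    """Step 2: kernel estimate of g(e|x) evaluated at e = y - m^(x)."""
    mhat_i = np.array([nw_mean(xi, Y, X, b0) for xi in X])  # fitted means m^(X_i)
    ehat = Y - mhat_i                                        # residuals e^_i
    e = y - nw_mean(x, Y, X, b0)                             # evaluation point y - m^(x)
    Kx = gaussian_kernel((x - X) / b2) / b2                  # K_{b2}(x - X_i)
    Ke = gaussian_kernel((e - ehat) / b1) / b1               # K_{b1}(e - e^_i)
    return np.sum(Kx * Ke) / np.sum(Kx)
```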

Theorem 1.
$$ n^{1/3} \left( \hat f(y \mid x) - f(y \mid x) \right) \to_d N\!\left( \theta_2, \sigma_2^2 \right), $$
where, with $e = y - m(x)$,
$$ \theta_2 = \frac{\sigma_K^2}{2} \left( a_1^2\, g^{(2,0)}(e \mid x) + a_2^2\, g^{(0,2)}(e \mid x) + 2 a_2^2\, g^{(0,1)}(e \mid x)\, \frac{f^{(1)}(x)}{f(x)} \right) $$
and
$$ \sigma_2^2 = \frac{R(K)^2\, f(y \mid x)}{a_1 a_2\, f(x)}, $$
with $g^{(r,s)}(e \mid x) = \partial^{r+s} g(e \mid x) / (\partial e^r\, \partial x^s)$.

This result states that the asymptotic distribution of the two-step estimator is unaffected by the first estimation step. The bandwidth $b_0$ does not enter the first-order approximation, and the distribution is the same as when the mean $m(x)$ and the errors $e_i$ are known without estimation. This occurs because the conditional mean estimator $\hat m(x)$ converges at the faster rate of $O(n^{-2/5})$.

5 Bias Comparison

Note that the scaled estimators $\tilde f$ and $\hat f$ have differing biases. We can compare them by observing that
$$ f^{(2,0)}(y \mid x) = g^{(2,0)}(e \mid x), $$
$$ f^{(0,1)}(y \mid x) = g^{(0,1)}(e \mid x) - g^{(1,0)}(e \mid x)\, m^{(1)}(x), $$
$$ f^{(0,2)}(y \mid x) = g^{(0,2)}(e \mid x) - 2 g^{(1,1)}(e \mid x)\, m^{(1)}(x) - g^{(1,0)}(e \mid x)\, m^{(2)}(x) + g^{(2,0)}(e \mid x)\, m^{(1)}(x)^2. $$
Therefore
$$ \theta_1 = \frac{\sigma_K^2}{2} \left( c_1^2\, g^{(2,0)}(e \mid x) + c_2^2 \left( g^{(0,2)}(e \mid x) - 2 g^{(1,1)}(e \mid x)\, m^{(1)}(x) - g^{(1,0)}(e \mid x)\, m^{(2)}(x) + g^{(2,0)}(e \mid x)\, m^{(1)}(x)^2 \right) \right) $$
$$ \qquad + \sigma_K^2\, c_2^2 \left( g^{(0,1)}(e \mid x) - g^{(1,0)}(e \mid x)\, m^{(1)}(x) \right) \frac{f^{(1)}(x)}{f(x)}. $$
Unless $m^{(1)}(x) = 0$, $\theta_1$ has more components than $\theta_2$, and will typically be larger (for equal bandwidths). Thus $\hat f$ has lower bias than $\tilde f$, enabling the selection of a larger bandwidth scale for $\hat f$ than for $\tilde f$, and thereby reducing variance and mean-squared error.
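The derivative identities above are simply the chain rule applied to $f(y \mid x) = g(y - m(x) \mid x)$. A short symbolic check (illustrative only; the particular functions chosen for $g$ and $m$ are arbitrary, not from the paper):

```python
import sympy as sp

y, x = sp.symbols('y x')
m = sp.sin(x)                      # arbitrary illustrative conditional mean m(x)
g = sp.exp(-y**2) * (1 + x**2)     # arbitrary illustrative g(e, x), with y standing in for e
e = y - m
f = g.subs(y, e)                   # f(y|x) = g(y - m(x) | x)

# f^{(0,1)}(y|x) = g^{(0,1)}(e|x) - g^{(1,0)}(e|x) m^{(1)}(x)
lhs1 = sp.diff(f, x)
rhs1 = (sp.diff(g, x) - sp.diff(g, y) * sp.diff(m, x)).subs(y, e)
print(sp.simplify(lhs1 - rhs1))    # 0

# f^{(0,2)} = g^{(0,2)} - 2 g^{(1,1)} m^{(1)} - g^{(1,0)} m^{(2)} + g^{(2,0)} (m^{(1)})^2
lhs2 = sp.diff(f, x, 2)
rhs2 = (sp.diff(g, x, 2) - 2 * sp.diff(g, y, x) * sp.diff(m, x)
        - sp.diff(g, y) * sp.diff(m, x, 2)
        + sp.diff(g, y, 2) * sp.diff(m, x) ** 2).subs(y, e)
print(sp.simplify(lhs2 - rhs2))    # 0
```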

6 Bandwidth Selection

Fan and Yim (2004) and Hall, Racine and Li (2004) have proposed a cross-validation method appropriate for nonparametric conditional density estimators. In this section we describe this method and its application to our estimators.

For an estimator $\bar f(y \mid x)$ of $f(y \mid x)$, define the integrated squared error
$$ I_n = \int\!\!\int \left( \bar f(y \mid x) - f(y \mid x) \right)^2 f(x)\, dy\, dx $$
$$ \quad = \int\!\!\int \bar f(y \mid x)^2 f(x)\, dy\, dx - 2 \int\!\!\int \bar f(y \mid x)\, f(y \mid x)\, f(x)\, dy\, dx + \int\!\!\int f(y \mid x)^2 f(x)\, dy\, dx $$
$$ \quad = I_{1n} - 2 I_{2n} + I_{3n}. $$
Note that $I_{3n}$ does not depend on the bandwidths and is thus irrelevant. Ideally, we would like to pick the bandwidths to minimize $I_n$, but this is infeasible as the function $I_n$ is unknown. Cross-validation replaces it with an estimate based on the leave-one-out principle. Let $\bar f_{-i}(y \mid x)$ denote the estimator $\bar f(y \mid x)$ computed with observation $i$ omitted. The cross-validation estimators of $I_{1n}$ and $I_{2n}$ are
$$ \hat I_{1n} = \frac{1}{n} \sum_{i=1}^n \int \bar f_{-i}(y \mid X_i)^2\, dy, \qquad \hat I_{2n} = \frac{1}{n} \sum_{i=1}^n \bar f_{-i}(Y_i \mid X_i). $$
We then define the cross-validation function as
$$ \hat I_n = \hat I_{1n} - 2 \hat I_{2n}. $$
The cross-validated bandwidths are those which jointly minimize $\hat I_n$.

For the one-step estimator these components equal
$$ \hat I_{2n} = \frac{1}{n} \sum_{i=1}^n \frac{\sum_{j \ne i} K_{h_2}(X_i - X_j)\, K_{h_1}(Y_i - Y_j)}{\sum_{j \ne i} K_{h_2}(X_i - X_j)} $$
and
$$ \hat I_{1n} = \frac{1}{n} \sum_{i=1}^n \frac{\sum_{j \ne i} \sum_{k \ne i} K_{h_2}(X_i - X_j)\, K_{h_2}(X_i - X_k) \int K_{h_1}(y - Y_j)\, K_{h_1}(y - Y_k)\, dy}{\left( \sum_{j \ne i} K_{h_2}(X_i - X_j) \right)^2} $$
$$ \quad = \frac{1}{n} \sum_{i=1}^n \frac{\sum_{j \ne i} \sum_{k \ne i} K_{h_2}(X_i - X_j)\, K_{h_2}(X_i - X_k)\, K_{\sqrt{2}\, h_1}(Y_k - Y_j)}{\left( \sum_{j \ne i} K_{h_2}(X_i - X_j) \right)^2}, $$
the second equality holding when $K(u) = \phi(u)$, the Gaussian kernel.

For the two-step estimator we first define the leave-one-out Nadaraya-Watson regression estimator $\hat m_{-i}(x)$ and the leave-one-out residuals $\hat e_i = Y_i - \hat m_{-i}(X_i)$. Then the estimators $\hat I_{1n}$ and $\hat I_{2n}$ take the form
$$ \hat I_{1n} = \frac{1}{n} \sum_{i=1}^n \frac{\sum_{j \ne i} \sum_{k \ne i} K_{b_2}(X_i - X_j)\, K_{b_2}(X_i - X_k)\, K_{\sqrt{2}\, b_1}(\hat e_k - \hat e_j)}{\left( \sum_{j \ne i} K_{b_2}(X_i - X_j) \right)^2} $$
and
$$ \hat I_{2n} = \frac{1}{n} \sum_{i=1}^n \frac{\sum_{j \ne i} K_{b_2}(X_i - X_j)\, K_{b_1}(\hat e_i - \hat e_j)}{\sum_{j \ne i} K_{b_2}(X_i - X_j)}. $$
These depend on the bandwidth $b_0$ through the residuals $\hat e_i$. Alternatively, for the two-step estimator the bandwidths may be selected separately for each step. Specifically, the bandwidth $b_0$ may be selected by least-squares cross-validation, and then $(b_1, b_2)$ by using the method for the one-step estimator, with the Nadaraya-Watson residuals $\hat e_i$ replacing $Y_i$.
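As an illustration, here is a minimal sketch of the cross-validation criterion for the one-step estimator, using the Gaussian-kernel closed form for $\hat I_{1n}$ given above; minimizing it over a grid of $(h_1, h_2)$ values is one simple implementation. The code is a sketch, not the authors' implementation.

```python
import numpy as np

def gk(u, h):
    """Gaussian kernel with bandwidth h: K_h(u) = phi(u/h)/h."""
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def cv_one_step(Y, X, h1, h2):
    """Leave-one-out criterion I^_n = I^_{1n} - 2 I^_{2n} for the ratio estimator."""
    n = len(Y)
    I1 = I2 = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        Xj, Yj = X[keep], Y[keep]
        Kx = gk(X[i] - Xj, h2)
        S = Kx.sum()
        # I^_{2n}: leave-one-out estimate of f(Y_i | X_i)
        I2 += np.sum(Kx * gk(Y[i] - Yj, h1)) / S
        # I^_{1n}: integral of the squared leave-one-out density (Gaussian closed form)
        I1 += np.sum(np.outer(Kx, Kx) * gk(Yj[:, None] - Yj[None, :], np.sqrt(2.0) * h1)) / S ** 2
    return (I1 - 2.0 * I2) / n

# Example: crude grid search over (h1, h2).
rng = np.random.default_rng(0)
X = rng.normal(size=100)
Y = X + rng.normal(size=100)
grid = [0.2, 0.4, 0.8]
best = min(((h1, h2) for h1 in grid for h2 in grid), key=lambda h: cv_one_step(Y, X, *h))
print(best)
```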

7 Simulation Evidence

The performance of the nonparametric estimators was compared in a simple stochastic setting. The data are generated by the process
$$ x_i \sim N(0,1), \qquad y_i \mid x_i \sim N\!\left( \beta_1 x_i,\ \frac{1 + \beta_2 x_i^2}{1 + \beta_2} \right). $$
1000 samples of size $n = 100$ were generated. We vary $\beta_1$ among 0.1, 1, and 2, and $\beta_2$ among 0.1 and 1. On each sample, the one-step estimator $\tilde f(y \mid x)$ and the two-step estimator $\hat f(y \mid x)$ were calculated, using a Gaussian kernel. We measure accuracy by mean integrated squared error,
$$ I(\bar f) = 100\, E \int\!\!\int \left( \bar f(y \mid x) - f(y \mid x) \right)^2 f(x)\, dy\, dx, $$
where the integrals are approximated by a grid on $(y, x)$.

The estimators depend critically on the bandwidths $h = (h_1, h_2)$ and $b = (b_0, b_1, b_2)$. For our first comparison, we use the infeasible oracle bandwidths. These are the bandwidths which minimize the finite-sample MISE. This enables a comparison of the estimation methods free of dependence on bandwidth selection methods.

For the two estimators, Table 1 reports the MISE and the oracle bandwidths. The results are as expected. For the case of a small conditional mean effect ($\beta_1 = 0.1$), the two estimators perform similarly in terms of MISE. However, if the conditional mean effect is non-trivial, then the two-step estimator $\hat f$ has much smaller MISE. The reduction in MISE is as much as 50%.

Table 1. Mean Integrated Squared Error Using Oracle Bandwidths, $n = 100$. Columns: $\beta_1$, $\beta_2$, $I(\tilde f)$, $I(\hat f)$, $h_1$, $h_2$, $b_0$, $b_1$, $b_2$.
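For reference, a sketch of the simulation design just described, generating one Monte Carlo sample and the true conditional density used in the MISE calculation (the oracle bandwidth search and the grid approximation of the integral are omitted):

```python
import numpy as np

def simulate_sample(n, beta1, beta2, rng):
    """x ~ N(0,1), y|x ~ N(beta1*x, (1 + beta2*x^2)/(1 + beta2))."""
    x = rng.normal(size=n)
    sd = np.sqrt((1.0 + beta2 * x ** 2) / (1.0 + beta2))
    y = beta1 * x + sd * rng.normal(size=n)
    return y, x

def true_cond_density(y, x, beta1, beta2):
    """The true f(y|x) implied by the design, used to evaluate integrated squared error."""
    sd = np.sqrt((1.0 + beta2 * x ** 2) / (1.0 + beta2))
    return np.exp(-0.5 * ((y - beta1 * x) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
y, x = simulate_sample(n=100, beta1=1.0, beta2=1.0, rng=rng)
```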

For our second comparison, we use data-dependent bandwidths. For the one-step estimator $\tilde f$ we use the cross-validated bandwidths. For the two-step estimator $\hat f$ we use sequential bandwidths: the bandwidth $\hat b_0$ is selected by least-squares cross-validation for the mean, and $(\hat b_1, \hat b_2)$ are selected by conditional density cross-validation using the estimated residuals.

Table 2 reports the MISE for the two estimators. It also reports the median data-dependent bandwidths. The qualitative results are similar to those for the optimal bandwidths, with the notable change that the improvement of the two-step estimator relative to the one-step estimator has been reduced. For the cases with a small conditional mean effect ($\beta_1 = 0.1$) the MISE is even somewhat higher for $\hat f$ than for $\tilde f$, but in the other cases $\hat f$ has much lower MISE. This suggests that further investigation into bandwidth selection may yield further improvements.

Table 2. Mean Integrated Squared Error Using Data-Dependent Bandwidths, $n = 100$. Columns: $\beta_1$, $\beta_2$, $I(\tilde f)$, $I(\hat f)$, $\hat h_1$, $\hat h_2$, $\hat b_0$, $\hat b_1$, $\hat b_2$.

8 Application

We illustrate the method with a time-series application. Let $Y_t$ denote US quarterly real GDP and let $y_t = 100(\ln(Y_t) - \ln(Y_{t-1}))$ denote its growth rate. We are interested in estimation of the one-step-ahead conditional density $f(y_t \mid y_{t-1})$. Due to strong evidence of a shift in variance in the early 1980s, we use the sample period 1983:1-2004:3, which results in a small sample.

First, for a baseline we take the linear Gaussian model, for which least squares yields the estimate
$$ \hat f_0(y_t \mid y_{t-1}) = \phi_{\hat\sigma}\!\left( y_t - \hat\alpha - \hat\beta\, y_{t-1} \right), $$
where $\hat\alpha$, $\hat\beta$ and $\hat\sigma$ denote the least-squares intercept, slope, and error standard deviation estimates.

Second, we estimate $f(y_t \mid y_{t-1})$ using the one-step estimator with cross-validated bandwidths $(\hat h_1, \hat h_2)$, and let this estimate be denoted $\hat f_1(y_t \mid y_{t-1})$.

Third, we estimate the conditional density using the two-step estimator with sequential cross-validated bandwidths $(\hat b_0, \hat b_1, \hat b_2)$, and denote this estimator $\hat f_2(y_t \mid y_{t-1})$. The cross-validated value of $\hat b_2$ is very large, meaning that cross-validation eliminates the conditional smoothing in the second step, so the estimated conditional density depends on $y_{t-1}$ only through the estimated conditional mean.

This is not surprising given our application to a small sample. This also highlights an important distinction between the one-step and two-step estimators, as the former does not have this flexibility.

Figures 1 through 4 display the three density estimates as a function of $y_t$ for four fixed values of $y_{t-1}$. In general, the three estimators differ from one another. In particular, the inefficient one-step estimator appears to be mis-centered and over-dispersed in Figure 1, and in all cases has a thicker right tail than the two-step estimator.
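For the baseline step, a short sketch of constructing the growth-rate series and the fitted linear Gaussian conditional density. The GDP series below is a synthetic stand-in (the actual data are not reproduced here), and the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in for the quarterly real GDP level series (not the actual data).
gdp = 5000.0 * np.exp(np.cumsum(0.008 + 0.005 * rng.normal(size=88)))

# Growth rate y_t = 100 (ln Y_t - ln Y_{t-1}) and its lag.
y = 100.0 * np.diff(np.log(gdp))
yt, ylag = y[1:], y[:-1]

# Linear Gaussian baseline: least squares of y_t on (1, y_{t-1}) with Gaussian errors.
Z = np.column_stack([np.ones_like(ylag), ylag])
coef, *_ = np.linalg.lstsq(Z, yt, rcond=None)
sigma = np.std(yt - Z @ coef)

def f0(y_val, y_prev):
    """Baseline conditional density estimate f^_0(y_t | y_{t-1})."""
    z = (y_val - coef[0] - coef[1] * y_prev) / sigma
    return np.exp(-0.5 * z ** 2) / (sigma * np.sqrt(2.0 * np.pi))

print(f0(0.8, 0.5))
```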

References

[1] Bashtannyk, D.M. and Rob J. Hyndman (2001): "Bandwidth selection for kernel conditional density estimation," Computational Statistics and Data Analysis, 36.

[2] Fan, Jianqing and Qiwei Yao (1998): "Efficient estimation of conditional variance functions in stochastic regression," Biometrika, 85.

[3] Fan, Jianqing and Qiwei Yao (2003): Nonlinear Time Series: Nonparametric and Parametric Methods. New York: Springer-Verlag.

[4] Fan, Jianqing, Qiwei Yao, and Howell Tong (1996): "Estimation of conditional densities and sensitivity measures in nonlinear dynamical systems," Biometrika, 83.

[5] Fan, Jianqing and Tsz Ho Yim (2004): "A cross-validation method for estimating conditional densities," Biometrika, forthcoming.

[6] Hall, Peter, Jeff Racine and Qi Li (2004): "Cross-validation and the estimation of conditional probability densities," working paper.

[7] Hall, Peter, R.C.L. Wolff and Qiwei Yao (1999): "Methods for estimating a conditional distribution function," Journal of the American Statistical Association, 94.

[8] Hansen, Bruce E. (2004): "Uniform Convergence Rates for Kernel Estimation," working paper.

[9] Hyndman, Rob J. and Qiwei Yao (2002): "Nonparametric estimation and symmetry tests for conditional density functions," Nonparametric Statistics, 14.

[10] Hyndman, Rob J., D.M. Bashtannyk and G.K. Grunwald (1996): "Estimating and visualizing conditional densities," Journal of Computational and Graphical Statistics, 5.

[11] Polonik, W. and Qiwei Yao (2000): "Conditional minimum volume predictive regions for stochastic processes," Journal of the American Statistical Association, 95.

[12] Robinson, Peter M. (1991): "Consistent nonparametric entropy-based testing," Review of Economic Studies, 58.

[13] Rosenblatt, M. (1969): "Conditional probability density and regression estimates," in Multivariate Analysis II, Ed. P.R. Krishnaiah, pp. 25-31. New York: Academic Press.

[14] Tjostheim, D. (1994): "Non-linear time series: A selective review," Scandinavian Journal of Statistics, 21.

9 Appendix

The proofs contained here are incomplete sketches, and omit regularity conditions. We first state a result from Hansen (2004).

Lemma 1. Let
$$ \hat G(x,z) = \frac{1}{n h_1 h_2} \sum_{i=1}^n K\!\left( \frac{x - X_i}{h_1} \right) K\!\left( \frac{z - Z_i}{h_2} \right) \psi(Y_i, X_i, Z_i). $$
Under regularity conditions,
$$ \sup_{x \in \mathbb{R},\, z \in \mathbb{R}} \left| \hat G(x,z) - E \hat G(x,z) \right| = O_p\!\left( \left( \frac{\log n}{n h_1 h_2} \right)^{1/2} \right). $$

Lemma 2. Uniformly for $f(x) \ge \delta_n = (\log n)^{-1/2}$, if $b_0 = c_0 n^{-1/5}$,
$$ \hat m(x) - m(x) = f(x)^{-1} \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i)\, e_i + b_0^2 \sigma_K^2 f(x)^{-1} f^{(1)}(x)\, m^{(1)}(x) + O_p\!\left( (\log n)^{1/2} n^{-3/5} \right). $$

Proof. We start with the decomposition
$$ \hat m(x) - m(x) = f(x)^{-1} \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i)\, e_i + f(x)^{-1} \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i)\left( m(X_i) - m(x) \right) $$
$$ \qquad + \left( \left( \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i) \right)^{-1} - f(x)^{-1} \right) \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i)\left( e_i + m(X_i) - m(x) \right). \tag{1} $$
We now examine the terms on the right-hand side. First, it is well known that
$$ E K_{b_0}(x - X_i) = f(x) + O(b_0^2) = f(x) + O\!\left( n^{-2/5} \right). $$
Combined with Lemma 1, uniformly in $x \in \mathbb{R}$,
$$ \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i) = E K_{b_0}(x - X_i) + O_p\!\left( \left( \frac{\log n}{n b_0} \right)^{1/2} \right) = f(x) + O_p\!\left( (\log n)^{1/2} n^{-2/5} \right). $$
By a Taylor expansion, uniformly for $f(x) \ge \delta_n$,
$$ \left( \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i) \right)^{-1} = f(x)^{-1} + O_p\!\left( \delta_n^{-2} (\log n)^{1/2} n^{-2/5} \right). $$

Second, note
$$ E\left( K_{b_0}(x - X_i)(x - X_i) \right) = \int_{\mathbb{R}} K_{b_0}(x - u)(x - u) f(u)\, du = b_0 \int_{\mathbb{R}} K(u)\, u\, f(x - b_0 u)\, du = -b_0^2 f^{(1)}(x)\, \sigma_K^2 + O(b_0^4). $$
Then by a Taylor expansion and Lemma 1,
$$ \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i)\left( m(X_i) - m(x) \right) \simeq -\frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i)(x - X_i)\, m^{(1)}(x) = b_0^2 \sigma_K^2 f^{(1)}(x)\, m^{(1)}(x) + O_p\!\left( (\log n)^{1/2} n^{-3/5} \right) $$
uniformly in $x$. Similarly, by Lemma 1,
$$ \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i)\, e_i = O_p\!\left( (\log n)^{1/2} n^{-2/5} \right). $$
Thus (1) equals
$$ \hat m(x) - m(x) = f(x)^{-1} \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i)\, e_i + b_0^2 \sigma_K^2 f(x)^{-1} f^{(1)}(x)\, m^{(1)}(x) + f(x)^{-1} O_p\!\left( (\log n)^{1/2} n^{-3/5} \right) + O_p\!\left( \delta_n^{-2} (\log n)\, n^{-4/5} \right) $$
$$ \quad = f(x)^{-1} \frac{1}{n} \sum_{i=1}^n K_{b_0}(x - X_i)\, e_i + b_0^2 \sigma_K^2 f(x)^{-1} f^{(1)}(x)\, m^{(1)}(x) + O_p\!\left( (\log n)^{1/2} n^{-3/5} \right), $$
as claimed. $\blacksquare$

Define the infeasible estimator which uses the true regression errors,
$$ \bar g(e \mid x) = \frac{\sum_{i=1}^n K_{b_2}(x - X_i)\, K_{b_1}(e - e_i)}{\sum_{i=1}^n K_{b_2}(x - X_i)}. $$

Lemma 3. For $b_2 = c_2 n^{-1/6}$,
$$ \hat g(e \mid x) - \bar g(e \mid x) = O_p\!\left( (\log n)^{1/2} n^{-2/5} \right). $$

Proof. Observe that
$$ \hat g(e \mid x) - \bar g(e \mid x) = \frac{A_n}{B_n}, $$
where
$$ A_n = \frac{1}{n} \sum_{i=1}^n K_{b_2}(x - X_i)\left( K_{b_1}(e - \hat e_i) - K_{b_1}(e - e_i) \right), \qquad B_n = \frac{1}{n} \sum_{i=1}^n K_{b_2}(x - X_i). $$
Since
$$ E K_{b_2}(x - X_i) = f(x) + O(b_2^2) = f(x) + O\!\left( n^{-1/3} \right), $$
then using Lemma 1, uniformly in $x \in \mathbb{R}$,
$$ B_n = f(x) + O_p\!\left( \left( \frac{\log n}{n b_2} \right)^{1/2} \right) + O\!\left( n^{-1/3} \right) = f(x) + O_p\!\left( n^{-1/3} \right), $$
and by a Taylor expansion, uniformly for $f(x) \ge \delta_n$,
$$ B_n^{-1} = f(x)^{-1} + O_p\!\left( \delta_n^{-2} n^{-1/3} \right). $$

Next, to decompose $A_n$, first observe that by a Taylor expansion
$$ K_{b_1}(e - \hat e_i) - K_{b_1}(e - e_i) \simeq K_{b_1}^{(1)}(e - e_i)\,(e_i - \hat e_i) = \frac{1}{b_1^2} K^{(1)}\!\left( \frac{e - e_i}{b_1} \right) \left( \hat m(X_i) - m(X_i) \right). $$
Second, by Lemma 2, uniformly in $i$,
$$ \hat m(X_i) - m(X_i) = f(X_i)^{-1} \frac{1}{n b_0} \sum_{j=1}^n K\!\left( \frac{X_i - X_j}{b_0} \right) e_j + b_0^2 \sigma_K^2 f(X_i)^{-1} f^{(1)}(X_i)\, m^{(1)}(X_i) + O_p\!\left( (\log n)^{1/2} n^{-3/5} \right). $$

Together,
$$ A_n \simeq \frac{1}{n} \sum_{i=1}^n K_{b_2}(x - X_i)\, \frac{1}{b_1^2} K^{(1)}\!\left( \frac{e - e_i}{b_1} \right) \left( f(X_i)^{-1} \frac{1}{n b_0} \sum_{j=1}^n K\!\left( \frac{X_i - X_j}{b_0} \right) e_j + b_0^2 \sigma_K^2 f(X_i)^{-1} f^{(1)}(X_i)\, m^{(1)}(X_i) + O_p\!\left( (\log n)^{1/2} n^{-3/5} \right) \right) $$
$$ = \frac{1}{n^2 b_0 b_1^2} \sum_{i=1}^n \sum_{j \ne i} K_{b_2}(x - X_i)\, K^{(1)}\!\left( \frac{e - e_i}{b_1} \right) K\!\left( \frac{X_i - X_j}{b_0} \right) f(X_i)^{-1} e_j $$
$$ \quad + \frac{K(0)}{n^2 b_0 b_1^2} \sum_{i=1}^n K_{b_2}(x - X_i)\, K^{(1)}\!\left( \frac{e - e_i}{b_1} \right) f(X_i)^{-1} e_i $$
$$ \quad + \frac{b_0^2 \sigma_K^2}{n b_1^2} \sum_{i=1}^n K_{b_2}(x - X_i)\, K^{(1)}\!\left( \frac{e - e_i}{b_1} \right) f(X_i)^{-1} f^{(1)}(X_i)\, m^{(1)}(X_i) $$
$$ \quad + \frac{1}{n b_1^2} \sum_{i=1}^n K_{b_2}(x - X_i)\, K^{(1)}\!\left( \frac{e - e_i}{b_1} \right) O_p\!\left( (\log n)^{1/2} n^{-3/5} \right) $$
$$ = A_{1n} + A_{2n} + A_{3n} + A_{4n}, $$
say. We now examine the four terms on the right-hand side, in reverse.

First, observe that
$$ E\left( \frac{1}{b_1^2} K_{b_2}(x - X_i)\, K^{(1)}\!\left( \frac{e - e_i}{b_1} \right) \right) = \int\!\!\int \frac{1}{b_1^2 b_2} K\!\left( \frac{x - u}{b_2} \right) K^{(1)}\!\left( \frac{e - v}{b_1} \right) g(v \mid u)\, f(u)\, dv\, du $$
$$ \quad = \frac{1}{b_1} \int\!\!\int K(u)\, K^{(1)}(v)\, g(e - b_1 v \mid x - b_2 u)\, f(x - b_2 u)\, dv\, du $$
$$ \quad = -\int K^{(1)}(v)\, v\, dv\; g^{(1,0)}(e \mid x)\, f(x) + O(b_1^2) + O(b_2^2) = g^{(1,0)}(e \mid x)\, f(x) + O\!\left( n^{-1/3} \right). $$
Thus, using Lemma 1,
$$ A_{4n} = \left( g^{(1,0)}(e \mid x)\, f(x) + O\!\left( n^{-1/3} \right) + O_p\!\left( \left( \frac{\log n}{n b_1^2 b_2} \right)^{1/2} \right) \right) O_p\!\left( (\log n)^{1/2} n^{-3/5} \right) = O_p\!\left( (\log n)^{1/2} n^{-3/5} \right). $$

Second,
$$ E\left( \frac{b_0^2 \sigma_K^2}{b_1^2} K_{b_2}(x - X_i)\, K^{(1)}\!\left( \frac{e - e_i}{b_1} \right) f(X_i)^{-1} f^{(1)}(X_i)\, m^{(1)}(X_i) \right) = b_0^2 \sigma_K^2 \int\!\!\int \frac{1}{b_1^2 b_2} K\!\left( \frac{x - u}{b_2} \right) K^{(1)}\!\left( \frac{e - v}{b_1} \right) f^{(1)}(u)\, m^{(1)}(u)\, g(v \mid u)\, dv\, du $$
$$ \quad = b_0^2 \sigma_K^2\, f^{(1)}(x)\, m^{(1)}(x)\, g^{(1,0)}(e \mid x) + o\!\left( n^{-2/5} \right) = O\!\left( n^{-2/5} \right), $$
so by Lemma 1
$$ A_{3n} = O\!\left( n^{-2/5} \right) + b_0^2 \sigma_K^2\, O_p\!\left( \left( \frac{\log n}{n b_1^2 b_2} \right)^{1/2} \right) = O_p\!\left( n^{-2/5} \right). $$

Third, similarly,
$$ E\left| A_{2n} \right| \le \frac{K(0)}{n b_0 b_1^2}\, E\left| K_{b_2}(x - X_i)\, K^{(1)}\!\left( \frac{e - e_i}{b_1} \right) f(X_i)^{-1} e_i \right| = O\!\left( \frac{1}{n b_0 b_1} \right) = O\!\left( n^{-5/6} b_0^{-1} \right), $$
so
$$ A_{2n} = O_p\!\left( n^{-5/6} b_0^{-1} \right) = o_p\!\left( n^{-2/5} \right). $$

Finally, we turn to $A_{1n}$. Note that $E A_{1n} = 0$. A tedious argument [to be completed] bounds $E(A_{1n}^2)$.

Together, we have $A_n = O_p\!\left( n^{-2/5} \right)$ and hence
$$ \hat g(e \mid x) - \bar g(e \mid x) = \left( f(x)^{-1} + O_p\!\left( \delta_n^{-2} n^{-1/3} \right) \right) O_p\!\left( n^{-2/5} \right) = O_p\!\left( (\log n)^{1/2} n^{-2/5} \right). \qquad \blacksquare $$

Proof of Theorem 1. By Lemma 3,
$$ \hat f(y \mid x) = \hat g(y - \hat m(x) \mid x) = \bar g(y - \hat m(x) \mid x) + O_p\!\left( (\log n)^{1/2} n^{-2/5} \right). $$
By a Taylor expansion,
$$ \left| \bar g(y - \hat m(x) \mid x) - \bar g(y - m(x) \mid x) \right| \le \sup_{e,x} \left| \bar g^{(1,0)}(e \mid x) \right| \left| \hat m(x) - m(x) \right| + O_p\!\left( (\log n)^{1/2} n^{-2/5} \right) = O_p\!\left( (\log n)^{1/2} n^{-2/5} \right). $$

Hence
$$ \hat f(y \mid x) = \bar g(e \mid x) + O_p\!\left( (\log n)^{1/2} n^{-2/5} \right) $$
with $e = y - m(x)$, and therefore
$$ n^{1/3} \left( \hat f(y \mid x) - f(y \mid x) \right) = n^{1/3} \left( \bar g(e \mid x) - g(e \mid x) \right) + O_p\!\left( (\log n)^{1/2} n^{1/3} n^{-2/5} \right) \to_d N\!\left( \theta_2, \sigma_2^2 \right), $$
as the asymptotic distribution of $\bar g$ is well known. $\blacksquare$
