THE EFFICIENCY OF BIAS CORRECTED ESTIMATORS FOR NONPARAMETRIC KERNEL ESTIMATION BASED ON LOCAL ESTIMATING EQUATIONS

Göran Kauermann, Marlene Müller and Raymond J. Carroll

April 18, 1998

Abstract

Stuetzle and Mittal (1979) for ordinary nonparametric kernel regression and Kauermann and Tutz (1996) for nonparametric generalized linear model kernel regression constructed estimators with lower order bias than the usual estimators, without the need for devices such as second derivative estimation and multiple bandwidths of different order. We derive a similar estimator in the context of local (multivariate) estimation based on estimating functions. As expected, this lower order bias is bought at a cost of increased variance. Surprisingly, when compared to ordinary kernel and local linear kernel estimators, the bias corrected estimators increase variance by a factor independent of the problem, depending only on the kernel used. The variance increase is approximately 40% or more for kernels in standard use. However, the variance increase is still less than that incurred when undersmoothing a local quadratic regression estimator.

Key words and phrases: Bias Reduction; Bootstrap; Estimating Equations; Generalized Linear Models; Local Linear Regression; Nonparametric Regression.

Short title: Proportional Variance Inflation for Bias Reduction.

Göran Kauermann is Wissenschaftlicher Assistent, Fachgebiet Statistik und Wirtschaftsmathematik, Technische Universität Berlin, Franklinstraße 28/29, D Berlin. Marlene Müller is Wissenschaftliche Assistentin, Institut für Statistik und Ökonometrie, Humboldt-Universität zu Berlin, Spandauerstraße 1, D Berlin. Raymond J. Carroll is Professor of Statistics, Nutrition and Toxicology, Department of Statistics, Texas A&M University, College Station, TX. Carroll's research was supported by a grant from the National Cancer Institute (CA 57030), and was partially completed while visiting the Institut für Statistik und Ökonometrie, Sonderforschungsbereich 373, Humboldt-Universität zu Berlin, with partial support from a senior Alexander von Humboldt Foundation research award.

1 INTRODUCTION

Nonparametric function estimation is greatly complicated by the problem of bias, and this has important consequences for inferential methods such as confidence bands and test statistics. For example, consider the problem of ordinary nonparametric regression to estimate the regression function Θ(x_0) of a response Y on a predictor X evaluated at a value x_0: Θ(x_0) = E(Y | X = x_0). Local linear regression estimators are solutions to the locally weighted estimating equation

    0 = Σ_{i=1}^n w_i(x_0) {Y_i - β_0 - β_1 (X_i - x_0)} (1, X_i - x_0)^T,    (1)

where the intercept is the regression estimate Θ̂(x_0, h) and the weights w_i(x_0) decrease with the distance of X_i from x_0. Consider the case that the weights are kernel weights K_h(X_i - x_0) = h^{-1} K{(X_i - x_0)/h} with a symmetric kernel density function K(·). Let Θ^{(2)}(x) be the second derivative of the function Θ(x), let f_x(x) be the density function of X, and suppose that the variance of Y given X is constant and equal to σ². Then it is well known (Fan & Gijbels, 1996, is a convenient reference) that if x_0 is interior to the support of X, the bias and the variance are given approximately by

    bias{Θ̂(x_0, h)} ≈ (1/2) h² Θ^{(2)}(x_0) ∫ z² K(z) dz;    (2)
    var{Θ̂(x_0, h)} ≈ σ² {n h f_x(x_0)}^{-1} ∫ K²(z) dz.    (3)

Results similar to (2)-(3) hold for ordinary kernel regression, for generalized linear models (Fan, Heckman & Wand, 1995) and more generally for local estimating equations (Carroll, Ruppert & Welsh, 1998); see Section 2 for definitions.

In practice, one must estimate the bandwidth, and this is usually done by minimizing an estimate of the mean squared error based on (2)-(3); see Ruppert (1998) for a review. For these bandwidth estimators, for which h is proportional to n^{-1/5}, the squared bias and the variance are of the same order. The net effect is that while the estimators are optimal in a mean squared error sense, the (squared) bias is approximately the same as the variance. Thus, while optimal bandwidth estimators give good function estimates, the fact that their squared bias approximately equals the variance has important implications for inference. For example, if one ignores the bias, confidence intervals for the regression function at a point x_0 have coverage levels asymptotically smaller than the nominal level.
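For concreteness, the following minimal sketch computes the local linear estimator defined by (1) by weighted least squares. It is illustrative only and not part of the paper; the Gaussian kernel, the simulated example and all function names are choices made here.

import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel K(u)."""
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def local_linear(x0, X, Y, h):
    """Local linear estimate of Theta(x0) = E(Y | X = x0), equation (1).
    Solves 0 = sum_i w_i(x0) {Y_i - b0 - b1 (X_i - x0)} (1, X_i - x0)^T
    and returns the intercept b0, i.e. Theta_hat(x0, h)."""
    w = gaussian_kernel((X - x0) / h) / h            # kernel weights K_h(X_i - x0)
    Z = np.column_stack([np.ones_like(X), X - x0])   # design (1, X_i - x0)
    WZ = Z * w[:, None]
    beta = np.linalg.solve(Z.T @ WZ, WZ.T @ Y)       # weighted least squares
    return beta[0]

# Simulated example: Theta(x) = sin(2x), sigma = 0.3, bandwidth of order n^(-1/5)
rng = np.random.default_rng(0)
n = 500
X = rng.uniform(-1.0, 1.0, n)
Y = np.sin(2.0 * X) + 0.3 * rng.standard_normal(n)
h = 0.5 * n ** (-0.2)
print(local_linear(0.2, X, Y, h), np.sin(0.4))

With a bandwidth of this order, (2)-(3) imply that the squared bias and the variance of such an estimate are of the same magnitude, which is the inferential difficulty discussed next.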

3 (1988), Härdle & Marro (1991), Eubak & Speckma (1993) ad Hall (1993), amog others. The result is to remove the bias, ad asymptotically ot chage the variace, because the h 2 i (2) is of small order, ad ay error i estimatig the secod derivative is of eve lower asymptotic order. There are a few difficulties with this geeral approach. First, all the above refereced papers iclude the eed for a secod badwidth, which we will call g. The secod badwidth is eeded effectively to estimate the secod derivative fuctio ad hece ecessarily coverges to zero more slowly tha the mai badwidth h. The secod badwidth g must be estimated as well, ad this if usually much harder to accomplish tha estimatig h itself. Secod, outside the cotext of estimatig a mea fuctio (Kauerma & Tutz, 1996; Carroll, Ruppert & Welsh, 1998), while it is possible to work out formulae similar to (2) (3), it is either clear how best to estimate the vector of secod derivative fuctios (ad their secod badwidths) or whether such estimatio leads to reasoable bias reductio properties i small samples. To address these cocers, Kauerma & Tutz suggest a alterative method of bias reductio. While the method is give i its estimatig equatio form i sectio 2, here we derive it i ordiary kerel regressio. The estimated regressio fuctio has the algebraic expressio Θ(x 0,h)= 1 w i (x 0 )Y i / 1 w i (x 0 ), ad from this oe deduces that Θ(x 0,h) Θ(x 0 )=b +E = w i (x 0 ) Θ(X i ) Θ(x 0 )} w i (x 0 ) + w i (x 0 ) Y i Θ(X i )}. w i (x 0 ) The term b determies the bias, while the term E is a mea zero radom variable which determies the variace. The simplest device to estimate bias is to plug i Θ( ) forθ( )ib, leadig to the bias corrected estimator Θ c (x 0,h) = Θ(x 0,h) w i (x 0 ) Θ(X i ) Θ(x } 0 ) / w i (x 0 ) (4) = w i (x 0 ) 2 Θ(x 0 ) Θ(X } i ) / w i (x 0 ). The estimator (4) is a bias corrected estimator which was called the twicig estimator by Stuetzle ad Mittal (1979): it has bias of lower order tha the usual kerel estimator. discuss the geeralizatio of this estimator to the estimatig equatio cotext. I sectio 2 we Oe would expect that bias corrected estimators such as (4) should have larger variace tha the ordiary estimator. We cosider this questio for estimatig equatios usig local average ad local liear kerel methods with symmetric kerel K( ). Our coclusio is surprisig: idepedet of the problem, bias corrected estimators are more variable by a factor depedig oly o the 2

One would expect that bias corrected estimators such as (4) should have larger variance than the ordinary estimator. We consider this question for estimating equations using local average and local linear kernel methods with a symmetric kernel K(·). Our conclusion is surprising: independent of the problem, bias corrected estimators are more variable by a factor depending only on the kernel, namely

    c(K) = ∫∫∫ K(z_2) K(z_3) {2K(z_1) - K(z_2 - z_1)} {2K(z_1) - K(z_3 - z_1)} dz_1 dz_2 dz_3 / ∫ K²(z) dz.    (5)

For the Gaussian kernel, c(Gaussian) = 1.44, while for the Epanechnikov kernel, c(Epanechnikov) = …

An alternative device to remove bias is direct undersmoothing, e.g., using a local quadratic regression with a bandwidth h ∝ n^{-1/5} carried over from local linear regression. The increase in variance, d(K) say, from this can be computed from results of Ruppert & Wand (1994). For the Gaussian kernel, d(Gaussian) = 1.69, while for the Epanechnikov kernel, d(Epanechnikov) = … Both variance increases are larger than for the bias corrected estimators (4).

2 LOCAL ESTIMATING EQUATIONS AND ESTIMATES OF BIAS

In parametric problems, estimation of a possibly vector valued parameter Θ is typically based on an unbiased estimating function ψ(·), so that if the data are generically denoted by Ỹ_i (i = 1, ..., n), then Θ̂ is the solution to the estimating equation 0 = Σ_{i=1}^n ψ(Ỹ_i, Θ). By an unbiased estimating function, we mean that Eψ(Ỹ, Θ) = 0. The choices of ψ(·) are well known. For example, when the data Ỹ consist only of a response Y and Θ is the mean of Y, then ψ(Ỹ, Θ) = Y - Θ. In generalized linear models, the data Ỹ consist of a response Y and covariates Z. The mean is µ(Z^T Θ), the variance is proportional to V(Z^T Θ), and the estimating function is the quasilikelihood score ψ(Ỹ, Θ) = Z µ^{(1)}(Z^T Θ) {Y - µ(Z^T Θ)} / V(Z^T Θ), where µ^{(1)}(·) is the first derivative of the function µ(·).

Nonparametric regression can be thought of as a varying coefficient model (Kauermann & Tutz, 1996), where the coefficient Θ varies with a covariate X. An estimate of Θ(x_0) can be obtained using local polynomials of order p ≥ 0 as follows. Define G_p(v) = (1, v, ..., v^p)^T, and let the weights w_i(x_0) be as in Section 1. Suppose that Θ(x) is a vector of length q. Let ⊗ denote the Kronecker product, so that for example (a, c) ⊗ (b_1, b_2, b_3) = (a b_1, a b_2, a b_3, c b_1, c b_2, c b_3). Define B^T = (β_0^T, ..., β_p^T). Then Θ̂(x_0) is the intercept β̂_0 in the solution to the equation

    0 = n^{-1} Σ_{i=1}^n w_i(x_0) G_p(X_i - x_0) ⊗ ψ[Ỹ_i, {G_p^T(X_i - x_0) ⊗ I_q} B].    (6)
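To illustrate how (6) is solved in practice, the sketch below fits a local linear (p = 1, q = 1) model with the Bernoulli likelihood score ψ(Y, η) = Y - µ(η), µ the logistic function, by Newton-Raphson. The logistic example, the Gaussian kernel and the function names are illustrative choices made here, not specifications from the paper.

import numpy as np

def kh(x0, X, h):
    """Gaussian kernel weights K_h(X_i - x0)."""
    u = (X - x0) / h
    return np.exp(-0.5 * u**2) / (h * np.sqrt(2.0 * np.pi))

def mu(t):
    """Logistic mean function for a Bernoulli response."""
    return 1.0 / (1.0 + np.exp(-t))

def local_linear_logit(x0, X, Y, h, iters=25):
    """Solve the local estimating equation (6) for p = 1, q = 1 with the
    Bernoulli likelihood score psi(Y, eta) = Y - mu(eta):
        0 = sum_i w_i(x0) G_1(X_i - x0) {Y_i - mu(G_1(X_i - x0)^T B)}.
    Returns the intercept beta_0, i.e. the estimate Theta_hat(x0)."""
    w = kh(x0, X, h)
    G = np.column_stack([np.ones_like(X), X - x0])       # rows are G_1(X_i - x0)^T
    B = np.zeros(2)
    for _ in range(iters):
        m = mu(G @ B)
        score = G.T @ (w * (Y - m))                      # the local estimating equation
        info = G.T @ (G * (w * m * (1.0 - m))[:, None])  # minus its Jacobian in B
        B += np.linalg.solve(info, score)
    return B[0]

Newton-Raphson with the weighted information matrix is one standard way to solve (6); any root-finder for the q(p + 1) equations would do.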

For these local polynomial estimators, formulae similar to (2)-(3) hold. In fact, if p = 1, the bias is still given by (2), while the variance is the same as (3) except that σ² is replaced by g^{-1}(x) L(x) g^{-T}(x), where g^{-T}(·) is the transpose of g^{-1}(·), χ(Ỹ, Θ) = -(∂/∂Θ) ψ(Ỹ, Θ), g(x) = E[χ{Ỹ, Θ(X)} | X = x] and L(x) = E[ψ{Ỹ, Θ(X)} ψ^T{Ỹ, Θ(X)} | X = x].

A bias corrected estimator is constructed as follows. Define

    B_n(x_0) = n^{-1} Σ_{i=1}^n w_i(x_0) [{G_p(X_i - x_0) G_p^T(X_i - x_0)} ⊗ χ{Ỹ_i, Θ(X_i)}].

Then, by a Taylor series expansion,

    Θ̂(x_0) - Θ(x_0) ≈ b_n + E_n,    (7)

where

    E_n = e_p B_n^{-1}(x_0) n^{-1} Σ_{i=1}^n w_i(x_0) G_p(X_i - x_0) ⊗ ψ{Ỹ_i, Θ(X_i)};
    b_n = e_p B_n^{-1}(x_0) n^{-1} Σ_{i=1}^n w_i(x_0) G_p(X_i - x_0) ⊗ (ψ[Ỹ_i, {G_p^T(X_i - x_0) ⊗ I_q} B] - ψ{Ỹ_i, Θ(X_i)}),

with e_p the q × q(p + 1) matrix of zeros except that the first q × q submatrix is the identity matrix. Just as in (4), the idea is to estimate the terms in the bias, leading to the estimator

    Θ̂_c(x_0) = Θ̂(x_0) - e_p B_n^{-1}(x_0) C_n(x_0);    (8)
    C_n(x_0) = n^{-1} Σ_{i=1}^n w_i(x_0) G_p(X_i - x_0) ⊗ (ψ[Ỹ_i, {G_p^T(X_i - x_0) ⊗ I_q} B̂] - ψ{Ỹ_i, Θ̂(X_i)});
    B_n(x_0) = n^{-1} Σ_{i=1}^n w_i(x_0) [{G_p(X_i - x_0) G_p^T(X_i - x_0)} ⊗ χ{Ỹ_i, Θ̂(X_i)}].

In the case of the likelihood score ψ(·) with local averages, p = 0 and G_p(v) = 1, the bias corrected estimator derived from (8) reproduces the bias corrected estimator of Kauermann & Tutz. In the appendix, we show that for local averages (p = 0) and local linear smoothing (p = 1), with bandwidth h ∝ n^{-1/5}, the bias corrected estimator is always asymptotically more variable than the uncorrected estimator (6), by the factor (5).
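A minimal numerical sketch of (8) in its simplest case, p = 0 with the Bernoulli likelihood score, is given below; the sign conventions follow (8) as written above, and the logistic example, the Gaussian kernel and all names are illustrative choices, not part of the paper. With ψ(Y, Θ) = Y - Θ the same recipe collapses to the twicing estimator (4).

import numpy as np

def kh(x0, X, h):
    """Gaussian kernel weights K_h(X_i - x0)."""
    u = (X - x0) / h
    return np.exp(-0.5 * u**2) / (h * np.sqrt(2.0 * np.pi))

def mu(t):
    """Logistic mean function."""
    return 1.0 / (1.0 + np.exp(-t))

def local_average_logit(x0, X, Y, h, iters=25):
    """Uncorrected local average (p = 0) estimate of Theta(x0) for the
    Bernoulli score psi(Y, Theta) = Y - mu(Theta): solves
    0 = sum_i K_h(X_i - x0) {Y_i - mu(Theta)} by Newton steps."""
    w = kh(x0, X, h)
    theta = 0.0
    for _ in range(iters):
        m = mu(theta)
        theta += np.sum(w * (Y - m)) / (np.sum(w) * m * (1.0 - m))
    return theta

def bias_corrected_logit(x0, X, Y, h):
    """Bias corrected estimator (8) for p = 0 and the likelihood score:
    Theta_c(x0) = Theta_hat(x0) - B_n^{-1}(x0) C_n(x0), with
    C_n(x0) = n^{-1} sum_i K_h(X_i - x0) {mu(Theta_hat(X_i)) - mu(Theta_hat(x0))},
    B_n(x0) = n^{-1} sum_i K_h(X_i - x0) mu'(Theta_hat(X_i))."""
    w = kh(x0, X, h)
    theta0 = local_average_logit(x0, X, Y, h)
    theta_i = np.array([local_average_logit(xi, X, Y, h) for xi in X])
    m_i = mu(theta_i)
    C = np.mean(w * (m_i - mu(theta0)))
    B = np.mean(w * m_i * (1.0 - m_i))   # chi = mu'(Theta) for this score
    return theta0 - C / B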

3 DISCUSSION

There are two general ways to correct for bias in nonparametric regression: (a) estimate the second derivative function directly and subtract a multiple of it from the usual estimator; and (b) bias correct indirectly, either by undersmoothing (applying a local linear bandwidth to a local quadratic estimator) or by the twicing technique. The major difficulty with method (a) is the need for a second bandwidth. We have shown that methods (b) are more variable than method (a), by a constant factor independent of the problem. Between the two possibilities in method (b), the twicing estimator is asymptotically less variable.

The twicing estimator has another advantage with respect to application. If the bias corrected estimators (4) and (8) are used, it is simple to estimate their variance: take any variance estimator for an uncorrected regression function, which is already typically available in the literature, and multiply it by the variance inflation factor (5).

REFERENCES

Carroll, R. J., Ruppert, D. & Welsh, A. (1998). Nonparametric estimation via local estimating equations, with applications to nutrition calibration. Journal of the American Statistical Association, to appear.

Eubank, R. L. & Speckman, P. L. (1993). Confidence bands in nonparametric regression. Journal of the American Statistical Association, 88.

Fan, J., Heckman, N. E. & Wand, M. P. (1995). Local polynomial kernel regression for generalized linear models. Journal of the American Statistical Association, 90.

Fan, J. & Gijbels, I. (1996). Local Polynomial Modelling and its Applications. London: Chapman & Hall.

Hall, P. (1993). On Edgeworth expansion and bootstrap confidence bands in nonparametric curve estimation. Journal of the Royal Statistical Society, Series B, 55.

Härdle, W. & Bowman, A. W. (1988). Bootstrapping in nonparametric regression: local adaptive smoothing and confidence bands. Journal of the American Statistical Association, 83.

Härdle, W. & Marron, J. S. (1991). Bootstrap simultaneous error bars for nonparametric regression. Annals of Statistics, 19.

Kauermann, G. & Tutz, G. (1996). On model diagnostics and bootstrapping in varying coefficient models. Submitted.

Ruppert, D. (1998). Empirical bias bandwidth selection. Journal of the American Statistical Association, to appear.

Ruppert, D. & Wand, M. P. (1994). Multivariate locally weighted least squares regression. Annals of Statistics, 22.

Stuetzle, W. & Mittal, Y. (1979). Some comments on the asymptotic behavior of robust smoothers. In Smoothing Techniques for Curve Estimation, editors T. Gasser and M. Rosenblatt, Springer Lecture Notes, 757.

Appendix A: PROOFS OF THEOREMS

In what follows, we will assume that h ∝ n^{-1/5}, and we will use the notation ≈ to mean equality up to terms of order o_p(h²). Recall that f_x(·) is the density of X. The kernel function K(·) is symmetric. Define g(x) = E[χ{Ỹ, Θ(X)} | X = x] and L(x) = E[ψ{Ỹ, Θ(X)} ψ^T{Ỹ, Θ(X)} | X = x]. Our argument here is heuristic but can be justified under strong regularity conditions, e.g. X and K are compactly supported, the density of X is bounded away from zero on its support, etc.

We provide details in the case of local averages, i.e., p = 0 in (6). The arguments are similar in the local linear case (p = 1) because the expansion (9) given below still holds in this case, but with (10) replaced by (2). Carroll et al. (1998) show that

    Θ̂(x) - Θ(x) ≈ C_n(x) ≈ h² r(x) + D_n(x),    (9)

where

    B_n(x) = n^{-1} Σ_{i=1}^n K_h(X_i - x) χ{Ỹ_i, Θ(x)};
    C_n(x) = B_n^{-1}(x) n^{-1} Σ_{i=1}^n K_h(X_i - x) ψ{Ỹ_i, Θ(x)};
    D_n(x) = B_n^{-1}(x) n^{-1} Σ_{i=1}^n K_h(X_i - x) ψ{Ỹ_i, Θ(X_i)};
    r(x) = {(1/2) Θ^{(2)}(x) + f_x^{(1)}(x) Θ^{(1)}(x) / f_x(x)} ∫ z² K(z) dz.    (10)

The variance of Θ̂(x) is approximately Σ = {n h f_x(x)}^{-1} {∫ K²(z) dz} g^{-1}(x) L(x) g^{-T}(x), where g^{-T}(·) is the transpose of g^{-1}(·).

Using these expansions, we have that

    Θ̂_c(x_0) - Θ(x_0)
      ≈ B_n^{-1}(x_0) n^{-1} Σ_{i=1}^n K_h(X_i - x_0) [ψ{Ỹ_i, Θ̂(X_i)} - ψ{Ỹ_i, Θ̂(x_0)} + χ{Ỹ_i, Θ(x_0)} {Θ̂(x_0) - Θ(x_0)}]
      ≈ B_n^{-1}(x_0) n^{-1} Σ_{i=1}^n K_h(X_i - x_0) χ{Ỹ_i, Θ(x_0)} [2{Θ̂(x_0) - Θ(x_0)} - {Θ̂(X_i) - Θ(X_i)}]
        + B_n^{-1}(x_0) n^{-1} Σ_{i=1}^n K_h(X_i - x_0) [ψ{Ỹ_i, Θ(X_i)} - ψ{Ỹ_i, Θ(x_0)}]
      ≈ B_n^{-1}(x_0) n^{-1} Σ_{i=1}^n K_h(X_i - x_0) χ{Ỹ_i, Θ(x_0)} {2D_n(x_0) - D_n(X_i)}
        + h² B_n^{-1}(x_0) n^{-1} Σ_{i=1}^n K_h(X_i - x_0) χ{Ỹ_i, Θ(x_0)} {2r(x_0) - r(X_i)}
        + B_n^{-1}(x_0) n^{-1} Σ_{i=1}^n K_h(X_i - x_0) [ψ{Ỹ_i, Θ(X_i)} - ψ{Ỹ_i, Θ(x_0)}]
      = G_1 + G_2 + G_3.

It is easily shown that G_2 ≈ h² r(x_0). Further, by calculations of first and second moments, it follows that G_3 ≈ -h² r(x_0), so that the two terms cancel. Finally, we have that B_n(x) = f_x(x) g(x) + o_p(h²). From this, it follows that

    Θ̂_c(x_0) - Θ(x_0) ≈ {f_x(x_0) g(x_0)}^{-1} n^{-1} Σ_{i=1}^n K_h(X_i - x_0) χ{Ỹ_i, Θ(x_0)} {2D_n(x_0) - D_n(X_i)}.

Now define

    L_i(x) = {f_x(x) g(x)}^{-1} n^{-1} Σ_{j=1}^n K_h(X_j - x) ψ{Ỹ_j, Θ(X_j)}.

Then it is easily seen that

    Θ̂_c(x_0) - Θ(x_0) ≈ {f_x(x_0) g(x_0)}^{-1} n^{-1} Σ_{i=1}^n K_h(X_i - x_0) χ{Ỹ_i, Θ(x_0)} {2L_i(x_0) - L_i(X_i)}.    (11)

Now write out (11) as a double sum in i and j, interchange the indices, eliminate the terms in which i and j are equal, since they are of order (nh)^{-1} = o_p(h²), and use the assumption that K(·) is symmetric, to get that

    Θ̂_c(x_0) - Θ(x_0) ≈ {f_x(x_0) g(x_0)}^{-1} n^{-1} Σ_{i=1}^n ψ{Ỹ_i, Θ(X_i)} M_n(x_0, X_i),    (12)

where

    M_n(x_0, X_i) = n^{-1} Σ_{j=1, j≠i}^n K_h(X_j - x_0) χ{Ỹ_j, Θ(x_0)} [2K_h(X_i - x_0) / {f_x(x_0) g(x_0)} - K_h(X_j - X_i) / {f_x(X_j) g(X_j)}].

Recall the definition of c(K) in (5). The right side of (12) is a mean zero random variable, and its variance is easily calculated to be c(K) Σ {1 + o(1)}, as claimed.
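The inflation factor (5) is straightforward to evaluate numerically. For a symmetric kernel, integrating over z_2 and z_3 first collapses the triple integral to ∫ {2K(z) - (K*K)(z)}² dz, where K*K denotes the convolution of K with itself. The sketch below uses this identity for the Gaussian kernel, for which K*K is the N(0, 2) density; it is an illustrative check only, not part of the paper.

import numpy as np

def c_gaussian(half_width=10.0, num_points=200001):
    """Numerically evaluate the variance inflation factor c(K) of (5) for the
    Gaussian kernel via c(K) = int {2K(z) - (K*K)(z)}^2 dz / int K(z)^2 dz,
    where K*K, the convolution of K with itself, is the N(0, 2) density."""
    z, dz = np.linspace(-half_width, half_width, num_points, retstep=True)
    K = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)    # N(0, 1) density
    KK = np.exp(-0.25 * z**2) / np.sqrt(4.0 * np.pi)  # N(0, 2) density = K*K
    return np.sum((2.0 * K - KK) ** 2) * dz / (np.sum(K ** 2) * dz)

print(round(c_gaussian(), 2))   # approximately 1.44, the value quoted for c(Gaussian)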
