IMPROVING EFFICIENT MARGINAL ESTIMATORS IN BIVARIATE MODELS WITH PARAMETRIC MARGINALS


HANXIANG PENG AND ANTON SCHICK

Abstract. Suppose we have data from a bivariate model with parametric marginals. Efficient estimators of the parameters in the marginal models are generally not efficient in the bivariate model. In this article, we propose a method of improving these marginal estimators and demonstrate that the magnitude of this improvement can be as large as 100 percent in some cases.

1. Introduction

Let (X, Y) be a bivariate random vector with distribution Q. Let F denote the distribution of X and G the distribution of Y. We assume that F and G belong to regular parametric models, but that Q is unknown otherwise. Then F = F_α and G = G_β. The parameter α can typically be estimated efficiently by the maximum likelihood estimator α̂ based on a random sample X_1, ..., X_n from F_α. Similarly, the parameter β can be estimated efficiently by the maximum likelihood estimator β̂ based on a random sample Y_1, ..., Y_n from G_β. If a random sample (X_1, Y_1), ..., (X_n, Y_n) from Q is observed, then the marginal estimator α̂ may no longer be efficient, as there may be information about α contained in the data Y_1, ..., Y_n. A similar observation applies to β̂.

In this paper we propose a simple method of using possible information about α contained in Y_1, ..., Y_n. More precisely, we propose to subtract from α̂ a properly weighted stochastic term that is an asymptotically unbiased estimator of zero. The resulting estimator has an asymptotic dispersion matrix that is at most as large as the asymptotic dispersion matrix of α̂. A variant of our approach has already been used in improving nonparametric estimators in the presence of a constraint; see e.g. Schick and Wefelmeyer (2008) for a recent overview of such methods.

We give the details of our method in Section 2. In Section 3 we discuss the magnitude of the improvement possible by this method. There it is demonstrated that improvements of up to 100 percent are possible. Section 4 discusses exponential marginals.

We conclude this introduction by specifying the notion of a regular parametric model as used throughout this paper. We use regularity in the sense of Bickel et al. (1993, pages 12-13) to mean continuous Hellinger-differentiability with positive definite information matrices. Here is the formal definition.

Definition 1. A model H = {H_θ : θ ∈ Θ} parametrized by an open subset Θ of R^k is regular if it is dominated by a measure ν, if the map θ ↦ h_θ = (dH_θ/dν)^{1/2} is continuously differentiable in L_2(ν), with derivative expressible as (1/2) κ(·, θ) h_θ for some κ(·, θ) in L_2^k(H_θ) satisfying ∫ κ(x, θ) dH_θ(x) = 0, and if J(θ) = ∫ κ(x, θ) κ^T(x, θ) dH_θ(x) is positive definite for each θ. Then κ(·, θ) is called the score function at θ and J(θ) the information matrix at θ.

Hanxiang Peng is supported by NSF Grant DMS. Anton Schick was supported by NSF Grant DMS.
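As a concrete instance of Definition 1, the exponential scale family of Section 4 has score κ(x, s) = (x − s)/s² and information J(s) = 1/s². A minimal Monte Carlo sketch, assuming NumPy, checking that this score has mean zero under F_s and second moment 1/s²:

```python
# Monte Carlo check of Definition 1 for the exponential scale family of
# Section 4: score kappa(x, s) = (x - s)/s^2, information J(s) = 1/s^2.
import numpy as np

rng = np.random.default_rng(0)
s = 2.5                                   # an arbitrary parameter value
x = rng.exponential(scale=s, size=200_000)

kappa = (x - s) / s**2                    # score evaluated at the true parameter
print("mean of score     :", kappa.mean())        # close to 0
print("second moment     :", np.mean(kappa**2))   # close to 1/s^2
print("information 1/s^2 :", 1 / s**2)
```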

2. The Method

Let (X, Y) be a bivariate random vector with distribution Q. Let F denote the distribution of X and G the distribution of Y. We assume that F and G belong to parametric models, but that Q is unknown otherwise. Let F = {F_s : s ∈ Θ_1} denote the parametric model for F, with Θ_1 an open subset of R^{k_1}, and G = {G_t : t ∈ Θ_2} the parametric model for G, with Θ_2 an open subset of R^{k_2}. We assume that the models F and G are regular. Since we assume that F belongs to F and G belongs to G, there are α ∈ Θ_1 and β ∈ Θ_2 such that F = F_α and G = G_β.

We write κ_1(·, s) for the score function of the model F at s, denote the corresponding information matrix by J_1(s) = ∫ κ_1(x, s) κ_1^T(x, s) dF_s(x), and set ψ_1(x, s) = J_1^{-1}(s) κ_1(x, s), s ∈ Θ_1. We write κ_2(·, t) for the score function of the model G at t, denote the corresponding information matrix by J_2(t) = ∫ κ_2(y, t) κ_2^T(y, t) dG_t(y), and set ψ_2(y, t) = J_2^{-1}(t) κ_2(y, t), t ∈ Θ_2.

Suppose we observe independent copies X_1, ..., X_n of X only. Then an efficient estimator α̂ of α must satisfy

(1) α̂ = α + (1/n) Σ_{j=1}^n ψ_1(X_j, α) + o_p(n^{-1/2}).

Similarly, if only independent copies Y_1, ..., Y_n of Y are observable, then an efficient estimator β̂ of β must satisfy

(2) β̂ = β + (1/n) Σ_{j=1}^n ψ_2(Y_j, β) + o_p(n^{-1/2}).

Typically maximum likelihood estimators possess these properties. We assume from now on that we have available estimators α̂ and β̂ that satisfy (1) and (2).

Now suppose that we observe independent copies (X_1, Y_1), ..., (X_n, Y_n) of the pair (X, Y). The above estimators use only the marginal information. Due to dependence, information about α may also be contained in the data Y_1, ..., Y_n, and information about β may be contained in the data X_1, ..., X_n. Thus the estimators α̂ and β̂ may no longer be efficient, and better estimators may be available. In this paper we demonstrate a simple approach of using the data Y_1, ..., Y_n to improve the estimator α̂ of α. By symmetry this also provides a method of improving the estimator β̂ of β.
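In the exponential marginal model of Section 4, ψ_1(x, α) = x − α and the maximum likelihood estimator is the sample mean, so (1) holds with no remainder. A minimal simulation sketch, assuming NumPy, checking that n^{1/2}(α̂ − α) then has variance close to J_1^{-1}(α) = α²:

```python
# Expansion (1) in the exponential marginal model: alpha_hat is the sample
# mean, psi_1(x, alpha) = x - alpha, and n^(1/2)(alpha_hat - alpha) has
# asymptotic variance J_1^{-1}(alpha) = alpha^2.
import numpy as np

rng = np.random.default_rng(1)
alpha, n, reps = 2.0, 400, 5_000
x = rng.exponential(scale=alpha, size=(reps, n))
alpha_hat = x.mean(axis=1)                 # marginal MLE, one value per replication

print("empirical variance of sqrt(n)(alpha_hat - alpha):",
      np.var(np.sqrt(n) * (alpha_hat - alpha)))
print("theoretical value alpha^2                       :", alpha**2)
```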

By our assumption on the models, G is dominated by a σ-finite measure µ. Write g_t for the density of G_t. Now let us look at a function W from R × Θ_2 to R^m such that, for each t in Θ_2,

(3) ∫ W(y, t) dG_t(y) = 0,

(4) ∫ W(y, t) κ_2^T(y, t) dG_t(y) = 0,

(5) ∫ W(y, t) W^T(y, t) dG_t(y) = I_m,

with I_m the m × m identity matrix. Suppose also that

(6) ∫ |W(y, β_n) g_{β_n}^{1/2}(y) − W(y, β) g_β^{1/2}(y)|² dµ(y) → 0 as β_n → β.

It then follows from Schick (2001) that

n^{-1/2} Σ_{j=1}^n [W(Y_j, β_n) − W(Y_j, β) + E[W(Y, β) κ_2^T(Y, β)](β_n − β)] = o_p(1)

for every sequence β_n in Θ_2 such that n^{1/2}(β_n − β) is bounded. This even holds if we replace β_n by a discretized version of β̂. To simplify notation we require that this holds without discretization. In view of (4) we then have

(7) n^{-1/2} Σ_{j=1}^n W(Y_j, β̂) = n^{-1/2} Σ_{j=1}^n W(Y_j, β) + o_p(1).

For a k_1 × m matrix D, consider the estimator α̂(D) of α defined by

α̂(D) = α̂ − (1/n) Σ_{j=1}^n D W(Y_j, β̂).

It has the expansion

α̂(D) = α + (1/n) Σ_{j=1}^n [ψ_1(X_j, α) − D W(Y_j, β)] + o_p(n^{-1/2}).

Thus n^{1/2}(α̂(D) − α) converges in distribution to a normal random vector with mean zero and the same dispersion matrix Ψ(D) as ψ_1(X, α) − D W(Y, β). It is straightforward to check that this dispersion matrix is minimized by

D = D_0 = E[ψ_1(X, α) W^T(Y, β)],

resulting in the minimal dispersion matrix

Ψ(D_0) = J_1^{-1}(α) − D_0 D_0^T.

Since D_0 D_0^T is non-negative definite, the estimator α̂(D_0) is asymptotically no more dispersed than the estimator α̂. In the case that X and Y are independent, there is no information on α contained in the data Y_1, ..., Y_n and no improvement over α̂ is possible. In this scenario, we have D_0 = 0 and Ψ(D_0) = J_1^{-1}(α).

The matrix D_0 is unknown and must be estimated. This can be done by

D̂ = (1/n) Σ_{j=1}^n ψ_1(X_j, α̂) W^T(Y_j, β̂).

This estimator is consistent if

(8) (1/n) Σ_{j=1}^n |ψ_1(X_j, α̂) − ψ_1(X_j, α)|² = o_p(1)

and

(9) (1/n) Σ_{j=1}^n |W(Y_j, β̂) − W(Y_j, β)|² = o_p(1).

Thus, under assumptions (7)-(9), we find that the estimator

α̂_* = α̂(D̂) = α̂ − D̂ (1/n) Σ_{j=1}^n W(Y_j, β̂)

satisfies

(10) α̂_* = α + (1/n) Σ_{j=1}^n [ψ_1(X_j, α) − D_0 W(Y_j, β)] + o_p(n^{-1/2}).

Hence n^{1/2}(α̂_* − α) is asymptotically normal with zero mean vector and dispersion matrix Ψ(D_0). Thus α̂_* improves upon α̂ unless D_0 = 0.
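A minimal implementation sketch of the improved estimator α̂_*, assuming NumPy; the routine handles scalar parameters and a scalar weight function (m = 1), and the instantiation uses the standard normal location model with the indicator weight w_0 introduced in Section 3 (the function names are illustrative, and the toy data are independent, so D_0 = 0 and the correction is asymptotically negligible, in line with the remark above):

```python
# Sketch of the improved estimator alpha_star of Section 2 (illustrative
# names; scalar parameters and a scalar weight function, m = 1).
import numpy as np

def improve(alpha_hat, beta_hat, x, y, psi1, W):
    """alpha_star = alpha_hat - D_hat * (1/n) sum_j W(Y_j, beta_hat), with
    D_hat = (1/n) sum_j psi1(X_j, alpha_hat) * W(Y_j, beta_hat)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w_vals = W(y, beta_hat)               # W(Y_j, beta_hat)
    psi_vals = psi1(x, alpha_hat)         # psi_1(X_j, alpha_hat)
    d_hat = np.mean(psi_vals * w_vals)    # estimates D_0 = E[psi_1(X, alpha) W(Y, beta)]
    return alpha_hat - d_hat * np.mean(w_vals)

# Instantiation for the standard normal location model of Section 3:
# psi_1(x, s) = x - s and W(y, t) = w0(y - t), where w0 = 2*1[-c,c] - 1 and
# c is the third quartile of the standard normal.
c = 0.6744897501960817                    # Phi^{-1}(0.75)
psi1 = lambda x, s: x - s
W = lambda y, t: 2.0 * (np.abs(y - t) <= c) - 1.0

rng = np.random.default_rng(2)
x, y = rng.normal(0.3, 1.0, 1000), rng.normal(-0.2, 1.0, 1000)  # independent toy data
print(improve(x.mean(), y.mean(), x, y, psi1, W))               # ~ x.mean(), since D_0 = 0
```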

We have already seen that D_0 = 0 if X and Y are independent. But D_0 = 0 can also happen with dependent data. For example, if X and Y have normal marginals and are jointly normal, then the maximum likelihood estimators of the means and variances based on the marginal observations coincide with the maximum likelihood estimators based on the joint distribution; the marginal estimators are thus also efficient for the bivariate data and cannot be improved. In the next section we shall discuss the magnitude of the improvements in some special cases. There we shall see that improvements of up to 100 percent are possible.

Remark 1. Our method can also be used if the marginal distribution G is known. In this case we replace W(Y_j, β̂) by W(Y_j), where W only needs to satisfy ∫ W dG = 0 and ∫ W W^T dG = I_m.

3. On the Magnitude of the Improvement

To see how large a gain can be achieved with our improvement procedure, we now consider a bivariate location model. Let f be a symmetric density that has finite Fisher information I(f) for location. This means that f is absolutely continuous and

I(f) = ∫ ℓ_f²(x) f(x) dx

is finite, where ℓ_f = −f′/f is the score function for location. We take F to be the location model F = {F_s : s ∈ R} generated by f, in which dF_s(x) = f(x − s) dx. It is well known that this model is regular with

κ_1(x, s) = ℓ_f(x − s),    J_1(s) = I(f),    ψ_1(x, s) = ℓ_f(x − s)/I(f).

For G we take the location model generated by a symmetric density g with finite Fisher information I(g) for location. Then G is regular with

κ_2(y, t) = ℓ_g(y − t),    J_2(t) = I(g),    ψ_2(y, t) = ℓ_g(y − t)/I(g).
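A minimal numerical sketch, assuming NumPy and SciPy, that evaluates the Fisher information for location I(f) = ∫ f′(x)²/f(x) dx and reproduces the values used in Examples 1-3 below (1 for the standard normal, 1/2 for the Cauchy, 1 for the Laplace density):

```python
# Numerical check of the Fisher information for location,
# I(f) = integral of f'(x)^2 / f(x) dx, for three symmetric densities.
import numpy as np
from scipy.integrate import quad

densities = {
    "normal":  (lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi),
                lambda x: -x * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)),
    "cauchy":  (lambda x: 1 / (np.pi * (1 + x**2)),
                lambda x: -2 * x / (np.pi * (1 + x**2) ** 2)),
    "laplace": (lambda x: 0.5 * np.exp(-np.abs(x)),
                lambda x: -0.5 * np.sign(x) * np.exp(-np.abs(x))),
}

for name, (f, fprime) in densities.items():
    info, _ = quad(lambda x: fprime(x) ** 2 / f(x), -np.inf, np.inf)
    print(f"I({name}) = {info:.4f}")   # expect 1, 0.5, 1
```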

As the underlying joint distribution we take the bivariate probability measure Q which has a density q of the form

(11) q(x, y) = (1 + ρ u(x − α) v(y − β)) f(x − α) g(y − β),  x, y ∈ R,

for some (unknown) reals α and β and some ρ in [−1, 1]. Here u and v are measurable functions with values in [−1, 1] such that ∫ u(x) f(x) dx = 0 and ∫ v(y) g(y) dy = 0. Some copula models are of this form. Indeed, the Farlie-Gumbel-Morgenstern copula model is of this type. See Hutchinson and Lai (1990) and Nelsen (1999) for properties and applications of this copula model as well as for additional references.

We take W(y, t) to be of the form w(y − t) for some function w. The above distribution Q now allows for a simple calculation of D_0. Indeed, under Q, we calculate

I(f) D_0 = ρ E[u(X − α) ℓ_f(X − α)] E[v(Y − β) w(Y − β)] = ρ ∫ u(x) ℓ_f(x) f(x) dx ∫ v(y) w(y) g(y) dy.

The required conditions (3)-(6) are implied if we take w so that

(12) ∫ w(y) g(y) dy = 0,  ∫ w²(y) g(y) dy = 1,  ∫ w(y) g′(y) dy = 0.

A possible choice for w is

w_0(x) = 2 · 1_{[−c,c]}(x) − 1 = { 1 if |x| ≤ c, −1 if |x| > c },

with c the third quartile of g. Indeed, the first part of (12) then follows from the choice of c, the second part follows from the fact that w_0² = 1, and the third part follows as w_0 is even and g′ is odd.

The asymptotic relative efficiency (ARE) of the proposed estimator α̂_* to the marginal estimator α̂ is the ratio of the asymptotic variance 1/I(f) − D_0² of α̂_* to the asymptotic variance 1/I(f) of α̂:

ARE = 1 − I(f) D_0² = 1 − (ρ²/I(f)) ( ∫ u(x) ℓ_f(x) f(x) dx )² ( ∫ v(y) w(y) g(y) dy )².

As v is bounded by 1, we see that

ARE ≥ 1 − (ρ²/I(f)) ( ∫ u(x) ℓ_f(x) f(x) dx )²,

with equality if v = w = w_0. Note that u = sign(ℓ_f) is an odd function in view of the symmetry of f, so that ∫ sign(ℓ_f(x)) f(x) dx = 0, and that |u| = 1. Note also that this u maximizes ∫ u(x) ℓ_f(x) f(x) dx subject to |u| ≤ 1 and ∫ u(x) f(x) dx = 0. Thus we obtain

(13) ARE ≥ 1 − (ρ²/I(f)) ( ∫ |f′(x)| dx )²,

with equality if u = sign(ℓ_f) and v = w = w_0.
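A minimal numerical sketch, assuming NumPy and SciPy, that checks the three conditions in (12) for w_0 when g is the standard normal density and evaluates (∫ |f′(x)| dx)² = 2/π entering the bound (13) for f standard normal (cf. Example 1 below):

```python
# Check of (12) for w0(x) = 2*1[-c,c](x) - 1 with c the third quartile of the
# standard normal, and of (int |f'|)^2 = 2/pi appearing in the bound (13).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

c = norm.ppf(0.75)                               # third quartile of g
w0 = lambda y: 1.0 if abs(y) <= c else -1.0
g = norm.pdf
gprime = lambda y: -y * norm.pdf(y)

print(quad(lambda y: w0(y) * g(y), -12, 12, points=[-c, c])[0])       # ~ 0
print(quad(lambda y: w0(y) ** 2 * g(y), -12, 12, points=[-c, c])[0])  # ~ 1
print(quad(lambda y: w0(y) * gprime(y), -12, 12, points=[-c, c])[0])  # ~ 0

int_abs_fprime = quad(lambda x: abs(x) * norm.pdf(x), -12, 12)[0]     # int |f'|
print("(int |f'|)^2 =", int_abs_fprime ** 2, "  2/pi =", 2 / np.pi)
```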

Example 1. Let us choose f and g to be the standard normal density p given by p(x) = (1/√(2π)) exp(−x²/2). This density has finite Fisher information I(p) = 1 and score function ℓ_p(x) = x. Based on the marginal observations X_1, ..., X_n, the maximum likelihood estimator α̂ is the sample mean. The sample mean is efficient in the (standard) normal location model F with asymptotic variance 1. For v = w = w_0 and u(x) = sign(x), we obtain

ARE = 1 − ρ² ( (2/√(2π)) ∫_0^∞ x exp(−x²/2) dx )² = 1 − 2ρ²/π.

Thus the efficiency gain can be as large as 2/π, or about 63.7 percent.

Example 2. Let us choose f and g to be the Cauchy density given by p(x) = 1/(π(1 + x²)). This density has finite Fisher information for location I(p) = 1/2 and score function ℓ_p(x) = 2x/(1 + x²). Based on the marginal observations X_1, ..., X_n, the maximum likelihood estimator α̂ is a solution of the score equation

Σ_{i=1}^n (X_i − α̂)/(1 + (X_i − α̂)²) = 0.

This estimator α̂ is efficient in the Cauchy location model F with asymptotic variance 2. The third quartile of the Cauchy density p is c = 1. For v = w = w_0 = 2 · 1_{[−1,1]} − 1 and u(x) = sign(x), we have

ARE = 1 − 2ρ² ( ∫ 2|x|/(π(1 + x²)²) dx )² = 1 − 8ρ²/π².

Thus the efficiency gain can be as large as 8/π², or about 81 percent.

Example 3. Let us choose f and g to be the Laplace (or double exponential) density p given by p(x) = (1/2) exp(−|x|). This density has finite Fisher information I(p) = 1 and score function ℓ_p(x) = sign(x). The maximum likelihood estimator α̂ is the sample median of X_1, ..., X_n. It is efficient in the Laplace location model F and has asymptotic variance 1. The third quartile of the Laplace density p is c = log 2. For v = w = w_0 = 2 · 1_{[−log 2, log 2]} − 1 and u(x) = sign(x), we obtain

ARE = 1 − ρ² ( ∫ (1/2) exp(−|x|) dx )² = 1 − ρ².

Thus the efficiency gain can be as large as 1, or 100 percent.
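A minimal simulation sketch for Example 3, assuming NumPy; it draws data from density (11) with Laplace marginals, u = sign, v = w_0 and ρ = 0.9, and compares the sample median with the improved estimator, whose empirical variance ratio should be close to the predicted ARE of 1 − ρ² = 0.19 (the rejection sampler and all names are illustrative):

```python
# Simulation for Example 3: Laplace marginals, joint density (11) with
# u = sign, v = w0 and rho = 0.9, so that ARE = 1 - rho^2 = 0.19.
import numpy as np

rng = np.random.default_rng(3)
rho, alpha, beta, n, reps = 0.9, 0.0, 0.0, 1000, 1000
c = np.log(2.0)                                   # third quartile of the Laplace density
w0 = lambda z: np.where(np.abs(z) <= c, 1.0, -1.0)

def sample_q(size):
    """Draw (X, Y) from density (11): X ~ Laplace, then Y given X by rejection."""
    x = rng.laplace(loc=alpha, size=size)
    y = np.empty(size)
    todo = np.arange(size)
    while todo.size:                              # envelope 2*g(y), acceptance prob >= 0.05
        cand = rng.laplace(loc=beta, size=todo.size)
        accept = (1 + rho * np.sign(x[todo] - alpha) * w0(cand - beta)) / 2
        keep = rng.uniform(size=todo.size) < accept
        y[todo[keep]] = cand[keep]
        todo = todo[~keep]
    return x, y

plain, improved = np.empty(reps), np.empty(reps)
for r in range(reps):
    x, y = sample_q(n)
    a_hat, b_hat = np.median(x), np.median(y)     # efficient marginal estimators
    psi = np.sign(x - a_hat)                      # psi_1(X_j, alpha_hat), Laplace model
    wv = w0(y - b_hat)                            # W(Y_j, beta_hat)
    d_hat = np.mean(psi * wv)                     # estimate of D_0
    plain[r], improved[r] = a_hat, a_hat - d_hat * wv.mean()

print("empirical ARE:", improved.var() / plain.var())   # close to 1 - rho^2 = 0.19
```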

4. Exponential marginals

Let us now assume that F = G = {F_s : s > 0}, where F_s is the exponential distribution with mean s, i.e., F_s has density f_s(x) = f(x/s)/s, where f(x) = exp(−x) 1_{(0,∞)}(x), x ∈ R. This model is regular with score function κ(x, s) = (x − s)/s² and information J(s) = 1/s². Thus ψ_1(x, α) = x − α and ψ_2(y, β) = y − β. The maximum likelihood estimator based on a random sample from this model is the sample mean and is efficient. Thus we take α̂ = X̄ = (X_1 + ... + X_n)/n and β̂ = Ȳ = (Y_1 + ... + Y_n)/n.

Here we take W to be of the form W(y, t) = w(y/t) for some function w. The required conditions (3)-(6) are implied if w satisfies

(14) ∫ w(y) f(y) dy = 0,  ∫ w²(y) f(y) dy = 1,  ∫ w(y)(y − 1) f(y) dy = 0.

Possible choices for w are given by

w = w^{(c,d)} = (1_{(c,d)} − p)/√(p(1 − p)),

where p = exp(−c) − exp(−d) and 0 < c < 1 < d < ∞ are chosen to satisfy c exp(−c) = d exp(−d).

We shall look at two different joint distributions Q. The first joint distribution has distribution function H given by

(15) H(x, y) = γ F_0(x/α) F_0(y/β) + (1 − γ) min(F_0(x/α), F_0(y/β)),  x, y ∈ R,

for some 0 < γ < 1, where F_0 is the distribution function of f. The second joint distribution has a density q of the form

(16) q(x, y) = (1 + ρ u(x/α) v(y/β)) f(x/α) f(y/β)/(αβ),  x, y ∈ R,

for some ρ in [−1, 1]. Here u and v are measurable functions with values in [−1, 1] such that ∫ u(x) f(x) dx = 0 and ∫ v(y) f(y) dy = 0.

We shall see that for the first joint distribution our method does not provide any improvement. Indeed, H is the distribution function of (α X̃, β(1[U ≤ γ] Ỹ + 1[U > γ] X̃)), where X̃, Ỹ and U are independent, X̃ and Ỹ are exponentially distributed with mean 1, and U is uniformly distributed on (0, 1). Thus we can express

D_0 = α E[(X̃ − 1) w(1[U ≤ γ] Ỹ + 1[U > γ] X̃)] = α (E[1[U ≤ γ](X̃ − 1) w(Ỹ)] + E[1[U > γ](X̃ − 1) w(X̃)]).

In view of the independence of U, X̃, Ỹ and the last condition in (14), we find D_0 = 0. Thus no improvement is possible.

For the second distribution we calculate

D_0 = ρ E[u(X/α)(X − α)] E[v(Y/β) w(Y/β)] = αρ ∫ u(x)(x − 1) exp(−x) dx ∫ v(y) w(y) exp(−y) dy.

The asymptotic relative efficiency of the proposed estimator α̂_* to the marginal estimator α̂ is the ratio of the asymptotic variance α² − D_0² of α̂_* to the asymptotic variance α² of α̂, namely

ARE = 1 − D_0²/α² = 1 − ρ² ( ∫ u(x)(x − 1) exp(−x) dx ∫ v(y) w(y) exp(−y) dy )².

Let us now take u = u_0 = 1_{(1,∞)} − 1_{(0,1−δ)} with δ = log(e − 1). Then the integral involving u becomes

δ_0 = exp(−1) + (1 − δ) exp(δ − 1) = 1 − (1 − e^{−1}) log(e − 1),

and accordingly,

ARE = 1 − ρ² δ_0² ( ∫ v(y) w(y) exp(−y) dy )².

For the choice w = w^{(c,d)} we obtain

ARE ≥ 1 − ρ² δ_0² p(1 − p)/max²(p, 1 − p),

with equality for v = (1_{(c,d)} − p)/max(p, 1 − p). If c and d are such that p = 1/2, then ARE = 1 − ρ² δ_0². Since δ_0² ≈ 0.433, the ARE can then be as small as about 0.567, so improvements of up to roughly 43.3 percent are possible.
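A minimal numerical sketch for this section, assuming NumPy and SciPy: it finds a pair (c, d) with p = 1/2 and c e^{−c} = d e^{−d}, checks conditions (14) for the corresponding w^{(c,d)}, and evaluates δ_0 together with the limiting ARE 1 − δ_0²:

```python
# Numerical companion to Section 4: solve for (c, d) with p = 1/2 and
# c*exp(-c) = d*exp(-d), check (14) for w^(c,d), and evaluate delta_0.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def d_of(c):
    """Solve d*exp(-d) = c*exp(-c) for the root d > 1."""
    return brentq(lambda d: d * np.exp(-d) - c * np.exp(-c), 1.0, 60.0)

c = brentq(lambda c: np.exp(-c) - np.exp(-d_of(c)) - 0.5, 0.01, 0.99)
d = d_of(c)
p = np.exp(-c) - np.exp(-d)
print(f"c = {c:.4f}, d = {d:.4f}, p = {p:.4f}")

w = lambda y: ((1.0 if c < y < d else 0.0) - p) / np.sqrt(p * (1 - p))
f = lambda y: np.exp(-y)
for integrand in (lambda y: w(y) * f(y),              # first condition in (14):  ~ 0
                  lambda y: w(y) ** 2 * f(y),         # second condition in (14): ~ 1
                  lambda y: w(y) * (y - 1) * f(y)):   # third condition in (14):  ~ 0
    print(quad(integrand, 0, 50, points=[c, d], limit=200)[0])

delta0 = 1 - (1 - np.exp(-1)) * np.log(np.e - 1)
print("delta_0^2 =", delta0 ** 2, "  minimal ARE =", 1 - delta0 ** 2)
```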

References

[1] Bickel, P. J., Klaassen, C. A. J., Ritov, Y. and Wellner, J. A. (1993). Efficient and Adaptive Estimation in Semiparametric Models. Johns Hopkins Univ. Press, Baltimore.
[2] Hutchinson, T. P. and Lai, C. D. (1990). Continuous Bivariate Distributions, Emphasising Applications. Rumsby Scientific Publishing, Adelaide.
[3] Nelsen, R. (1999). An Introduction to Copulas. Lecture Notes in Statistics 139. Springer-Verlag, New York.
[4] Peng, H. and Schick, A. (2004). Estimation of linear functionals of a bivariate probability with unknown parametric marginals. Statistics & Decisions.
[5] Schick, A. (2001). On asymptotic differentiability of averages. Statist. Probab. Lett. 51.
[6] Schick, A. and Wefelmeyer, W. (2008). Some developments in semiparametric models. To appear in: J. Stat. Theory Pract.

Hanxiang Peng, Department of Mathematical Sciences, Purdue School of Science, Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA

Anton Schick, Department of Mathematical Sciences, Binghamton University, Binghamton, NY, USA
