ON LOCAL LINEAR ESTIMATION IN NONPARAMETRIC ERRORS-IN-VARIABLES MODELS


Theory of Stochastic Processes, Vol. 12 (28), no. 3-4, 2006, pp. *-*.

SILVELYN ZWANZIG

Local linear methods are applied to a nonparametric regression model with normal errors in the variables and uniform distribution of the design variables. The local neighborhood is determined with the help of deconvolution kernels. Two different linear estimation methods are used: the naive estimator and the total least squares estimator. Both local linear estimators are consistent, but only the local naive estimator delivers an estimate of the tangent.

Invited lecture.
2000 Mathematics Subject Classifications: 62G08, 62G05, 62G20.
Key words and phrases: errors in variables, measurement errors, nonparametric regression, deconvolution, kernel methods, local linear regression, naive estimation, total least squares.

1. Introduction

Errors-in-variables models are essentially more complicated than ordinary regression models. The design points are observed with errors only, so that the models include an increasing number of nuisance parameters. Nevertheless one wants to estimate a regression function belonging to some smoothness class. Fan and Truong (1993) applied the deconvolution technique of density estimation to nonparametric regression with errors in variables. The main idea was to use kernel estimators with deconvolution kernels.

In ordinary regression a method of bias reduction is local polynomial regression, see [1]. Local linear regression has two main aspects:
- Around the point of interest x a local neighborhood is defined.
- The regression function is approximated by its tangent at x. This tangent is estimated by the usual methods with observations coming from the neighborhood.

The definition of the neighborhood of x is not trivial in errors-in-variables models, because observations of design points near the point of interest x can come from a design point lying far away.

Staudenmayer and Ruppert (2004) applied the local polynomial approach to errors-in-variables models. They derived asymptotic results for decreasing error variances. The local neighborhood is described by weights based on ordinary kernels at the observed variables. The tangent, or the best polynomial, is estimated by a weighted naive estimator.

In this paper we are interested in consistent local linear regression, where the consistency is based on increasing sample size n. Unfortunately the best possible rates are slow: for normal errors in the variables the best possible rate is $O_P\left(\frac{1}{\log n}\right)$. In this paper we describe the local neighborhood by deconvolution kernels; that is an adjustment for the errors in the variables. As estimation methods for the tangent, two different estimators from the theory of the linear errors-in-variables model are taken: a weighted linear naive estimator and a weighted total least squares estimator. The unweighted naive estimator has a bias in the usual linear errors-in-variables model. The unweighted total least squares estimator is consistent and is the maximum likelihood estimator in linear errors-in-variables models with normal error distributions.

The main result is that both procedures deliver consistent estimators, but only the weighted naive estimator is an estimator of the tangent. The weighted total least squares estimator estimates something else; the limit value of its slope is derived. A heuristic interpretation of this effect may be that the weights based on the deconvolution kernels and the total least squares estimation principle are two independent adjustments for the same thing.

2. The model and methods

Let the observations $(x_1, y_1), \dots, (x_i, y_i), \dots, (x_n, y_n)$ be independently distributed, generated by

$y_i = g(\xi_i) + \varepsilon_i$,   (1)

$x_i = \xi_i + \delta_i$.   (2)

The error variables $\varepsilon_i, \delta_i$, $i = 1, \dots, n$, are mutually independent with expectation zero and bounded fourth moments, with $\mathrm{Var}(\varepsilon_i) = \sigma^2$ and $\mathrm{Var}(\varepsilon_i^2) \le \mu_{4\varepsilon}$. The errors in the errors-in-variables equation (2) are normally distributed,

$\delta_i \sim N(0, \sigma^2)$ i.i.d., $\sigma^2 > 0$.   (3)

The unobserved design points come from a uniform distribution,

$\xi_i \in [0, 1]$, i.i.d. $U[0, 1]$,   (4)

and $\xi_i, \varepsilon_i, \delta_i$ are mutually independent. We assume a smooth regression function $g \in C^2[0, 1]$ with

$|g''(x) - g''(x + \delta)| \le L\,|\delta|$.   (5)
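The data-generating mechanism (1)-(4) is easy to make concrete. The following minimal sketch (mine, not part of the paper) simulates observations from the model; the particular regression function g, the common error standard deviation and the sample size are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_eiv(n, g, sigma=0.1):
    """Draw (x_i, y_i) from the errors-in-variables model (1)-(4)."""
    xi = rng.uniform(0.0, 1.0, size=n)       # unobserved uniform design points, (4)
    eps = rng.normal(0.0, sigma, size=n)     # regression errors with variance sigma^2
    delta = rng.normal(0.0, sigma, size=n)   # normal measurement errors, (3)
    y = g(xi) + eps                          # (1)
    x = xi + delta                           # (2); only (x_i, y_i) are observed
    return x, y, xi

# illustrative smooth regression function (an assumption, not from the paper)
g = lambda t: np.sin(2.0 * np.pi * t)
x_obs, y_obs, xi_true = simulate_eiv(n=2000, g=g)
```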

The aim is to estimate $g(x)$ for a given $x \in (0, 1)$.

Further we introduce kernel functions $K$ with

$\int K(u)\,du = 1$, $K(u) = K(-u)$, $\int K(u)\,u^2\,du = \mu_K < \infty$,   (6)

and with compactly supported Fourier transform

$\Phi_K(t) = \int \exp(itu)\,K(u)\,du$, $\Phi_K(t) = 0$ for $t < a$ and $t > b$.   (7)

Furthermore we assume that the second derivative of the kernel exists and that, for some constants $\mu_{K'}, \mu_{K''}, c_K, c_{K'}$,

$\int K'(u)^2\,du = \mu_{K'} < \infty$, $\int K''(u)^2\,du = \mu_{K''} < \infty$,   (8)

$|u^4 K(u)| \le c_K$, $|u^4 K'(u)| \le c_{K'}$.   (9)

Examples of kernels fulfilling the conditions (6)-(9) are

$K_2(u) = \frac{48\cos u}{\pi u^4}\left(1 - \frac{15}{u^2}\right) - \frac{144\sin u}{\pi u^5}\left(2 - \frac{5}{u^2}\right)$ and $K_3(u) = \frac{3}{8\pi}\left(\frac{\sin(u/4)}{u/4}\right)^4$.

Note that the sinc kernel $K_1(u) = \frac{\sin u}{\pi u}$ does not fulfill the condition (9).

We set the bandwidth

$h = \left(\frac{C}{\log n}\right)^{1/2}$, with $C$ sufficiently large.

That is the optimal bandwidth in the model (1)-(3), see [2].

In ordinary local linear regression common local weights are

$w_i(x) = \frac{K\left(\frac{\xi_i - x}{h}\right)}{\sum_{j=1}^n K\left(\frac{\xi_j - x}{h}\right)}$.   (10)

In errors-in-variables models the design points $\xi_i$ are unknown; that is why we have chosen

$w_i^*(x) = \frac{K^*\left(\frac{x_i - x}{h}\right)}{\sum_{j=1}^n K^*\left(\frac{x_j - x}{h}\right)}$,   (11)

where $K^*$ is the deconvolution kernel of $K$, defined by

$K^*(u) = \frac{1}{2\pi}\int \exp(-itu)\,\exp\left(\frac{t^2\sigma^2}{2h^2}\right)\Phi_K(t)\,dt$.   (12)

Note that the denominators in (10) and (11) are positive with increasing probability, because they are consistent density estimators; compare also (22).
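The deconvolution kernel (12) and the weights (11) can be evaluated numerically. The sketch below is mine, not the paper's: it assumes a particular compactly supported Fourier transform $\Phi_K$, namely $(1-t^2)^3$ on $[-1,1]$ (a common choice in the deconvolution literature), and approximates the integral in (12) by a simple Riemann sum; all function and parameter names are illustrative.

```python
import numpy as np

def phi_K(t):
    """Assumed Fourier transform of K: (1 - t^2)^3 on [-1, 1], zero outside."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= 1.0, (1.0 - t ** 2) ** 3, 0.0)

def deconvolution_kernel(u, h, sigma, n_grid=2001):
    """Evaluate K*(u) from (12) by a Riemann sum over the support of Phi_K."""
    t = np.linspace(-1.0, 1.0, n_grid)
    dt = t[1] - t[0]
    u = np.atleast_1d(np.asarray(u, dtype=float))
    # exp(-i t u) reduces to cos(t u) because Phi_K is real and even
    integrand = (np.cos(np.outer(u, t))
                 * np.exp(t ** 2 * sigma ** 2 / (2.0 * h ** 2))
                 * phi_K(t))
    return integrand.sum(axis=1) * dt / (2.0 * np.pi)

def deconvolution_weights(x_obs, x, h, sigma):
    """Weights w_i^*(x) from (11), built from the observed design points x_i.

    K* takes negative values, so these weights are not a probability
    distribution; they only sum to one.
    """
    k_star = deconvolution_kernel((x_obs - x) / h, h, sigma)
    return k_star / k_star.sum()
```

The factor $\exp\left(\frac{t^2\sigma^2}{2h^2}\right)$ in (12) is what makes deconvolution with normal measurement errors hard: it blows up as h decreases, which is why the bandwidth can shrink only at a logarithmic rate.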

For weighted means and weighted quadratic forms with the weights (10) we use a notation analogous to [3]: $\bar x_w, \bar\xi_w, m_{\xi\xi}, m_{yx}, \dots, m_{gg}$. If the weights (11) are used, then we write $\bar x_w^*, \bar\xi_w^*, m_{\xi\xi}^*, m_{yx}^*, \dots, m_{gg}^*$. Thus

$\bar x_w = \sum_{i=1}^n w_i(x)\,x_i$, $\bar x_w^* = \sum_{i=1}^n w_i^*(x)\,x_i$,

and

$m_{xy} = \sum_{i=1}^n w_i(x)(x_i - \bar x_w)(y_i - \bar y_w)$, $m_{xy}^* = \sum_{i=1}^n w_i^*(x)(x_i - \bar x_w^*)(y_i - \bar y_w^*)$,

and so on.

Denote the local linear approximation of $g(\xi)$ by $t(\xi) = \beta_0 + \beta_1(\xi - x)$. Then the local naive estimator is defined as

$\hat g_{\mathrm{naive}}(x) = \bar y_w^* - \frac{m^*_{xy}}{m^*_{xx}}\left(\bar x_w^* - x\right)$.   (13)

Note that $\hat g_{\mathrm{naive}}(x) = \hat t_{\mathrm{naive}}(x) = \hat\beta_{0,\mathrm{naive}}$, where

$\left(\hat\beta_{0,\mathrm{naive}}, \hat\beta_{1,\mathrm{naive}}\right)^T = \arg\min_{\beta_0,\beta_1} \sum_{i=1}^n w_i^*(x)\left(y_i - t(x_i)\right)^2$.

The local total least squares estimator is defined for $m^*_{xy} \neq 0$ as

$\hat g_{\mathrm{tls}}(x) = \bar y_w^* - \hat\beta_{1,\mathrm{tls}}\left(\bar x_w^* - x\right)$   (14)

with

$\hat\beta_{1,\mathrm{tls}} = \frac{m^*_{yy} - m^*_{xx} + \sqrt{\left(m^*_{yy} - m^*_{xx}\right)^2 + 4\left(m^*_{xy}\right)^2}}{2\,m^*_{xy}}$.   (15)

Note that $\hat g_{\mathrm{tls}}(x) = \hat t_{\mathrm{tls}}(x) = \hat\beta_{0,\mathrm{tls}}$, where

$\left(\hat\beta_{0,\mathrm{tls}}, \hat\beta_{1,\mathrm{tls}}\right)^T = \arg\min_{\beta_0,\beta_1} \sum_{i=1}^n w_i^*(x)\,\min_{\xi}\left[\left(y_i - t(\xi)\right)^2 + \left(x_i - \xi\right)^2\right]$.
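Once the weights $w_i^*(x)$ are available (for instance from the sketch after (12) above), both local estimators are short formulas in the weighted moments. The following sketch implements (13)-(15) directly; it is an illustration under the notation above, not code from the paper, and the helper names are mine.

```python
import numpy as np

def weighted_moments(w, x_obs, y):
    """Weighted means and central second moments with the weights w = w_i^*(x)."""
    x_bar = np.sum(w * x_obs)
    y_bar = np.sum(w * y)
    m_xx = np.sum(w * (x_obs - x_bar) ** 2)
    m_yy = np.sum(w * (y - y_bar) ** 2)
    m_xy = np.sum(w * (x_obs - x_bar) * (y - y_bar))
    return x_bar, y_bar, m_xx, m_yy, m_xy

def local_naive(w, x_obs, y, x):
    """Local naive estimate (13): the weighted local linear fit evaluated at x."""
    x_bar, y_bar, m_xx, _, m_xy = weighted_moments(w, x_obs, y)
    beta1 = m_xy / m_xx
    return y_bar - beta1 * (x_bar - x), beta1

def local_tls(w, x_obs, y, x):
    """Local total least squares estimate (14) with the slope (15); needs m_xy != 0."""
    x_bar, y_bar, m_xx, m_yy, m_xy = weighted_moments(w, x_obs, y)
    beta1 = (m_yy - m_xx + np.sqrt((m_yy - m_xx) ** 2 + 4.0 * m_xy ** 2)) / (2.0 * m_xy)
    return y_bar - beta1 * (x_bar - x), beta1

# usage with the earlier sketches:
# w = deconvolution_weights(x_obs, x0, h, sigma)
# g_naive, b_naive = local_naive(w, x_obs, y_obs, x0)
# g_tls, b_tls = local_tls(w, x_obs, y_obs, x0)
```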

3. Main result

In this section the main theorems are proved. The proofs are based on auxiliary results shown in Section 4.

The following theorem states the consistency of the local naive estimator. Further it is shown that the naive estimator is really an estimator of the tangent.

Theorem 1. In the model (1)-(5) and for kernels with (6)-(9) it holds that

1. $\hat\beta_{1,\mathrm{naive}} = g'(x) + O_P\left(\frac{1}{\log n}\right)$,

2. $\hat g_{\mathrm{naive}}(x) = g(x) + O_P\left(\frac{1}{\log n}\right)$.

Proof. 1. Using (37) and (38) we get

$\hat\beta_{1,\mathrm{naive}} = \frac{m^*_{xy}}{m^*_{xx}} = \frac{-\sigma^2 g'(x) + O_P\left(\frac{1}{\log n}\right)}{-\sigma^2 + O_P\left(\frac{1}{\log n}\right)} = g'(x) + O_P\left(\frac{1}{\log n}\right)$.

2. Remember (13). Applying (35), (36) and that $\hat\beta_{1,\mathrm{naive}}$ is stochastically bounded, we obtain the statement.

The following theorem delivers the results for the local total least squares estimator. The estimator is consistent but does not yield a consistent estimate of the tangent.

Theorem 2. In the model (1)-(5) and for kernels with (6)-(9) it holds for $g'(x) \neq 0$ that

1. $\hat\beta_{1,\mathrm{tls}} = -\frac{1 + \sqrt{1 + g'(x)^2}}{g'(x)} + O_P\left(\frac{1}{\log n}\right)$,

2. $\hat g_{\mathrm{tls}}(x) = g(x) + O_P\left(\frac{1}{\log n}\right)$.

Proof. Introduce

$F(x) = \begin{cases} F_1(x) & \text{for } m^*_{xy} > 0, \\ \text{undefined} & \text{for } m^*_{xy} = 0, \\ F_2(x) & \text{for } m^*_{xy} < 0, \end{cases}$   (16)

where

$F_1(x) = \frac{x + \sqrt{x^2 + 4}}{2}$ and $F_2(x) = \frac{x - \sqrt{x^2 + 4}}{2}$.

Both functions are increasing, $F_1$ is convex and $F_2$ is concave. The first derivatives are bounded by 1 for all x. It holds that

$\hat\beta_{1,\mathrm{tls}} = F\left(\frac{m^*_{yy} - m^*_{xx}}{m^*_{xy}}\right)$.
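The identity $\hat\beta_{1,\mathrm{tls}} = F\left(\frac{m^*_{yy}-m^*_{xx}}{m^*_{xy}}\right)$ used in the next step of the proof is a purely algebraic restatement of the closed form (15). A few lines of code (mine, for illustration only) confirm the algebra on random pseudo-moments.

```python
import numpy as np

rng = np.random.default_rng(1)

def tls_slope(m_xx, m_yy, m_xy):
    """Closed-form total least squares slope, formula (15)."""
    d = m_yy - m_xx
    return (d + np.sqrt(d ** 2 + 4.0 * m_xy ** 2)) / (2.0 * m_xy)

def F1(z):
    return (z + np.sqrt(z ** 2 + 4.0)) / 2.0

def F2(z):
    return (z - np.sqrt(z ** 2 + 4.0)) / 2.0

for _ in range(1000):
    m_xx, m_yy, m_xy = rng.normal(size=3)   # arbitrary pseudo-moments, m_xy != 0 a.s.
    z = (m_yy - m_xx) / m_xy
    F = F1 if m_xy > 0 else F2              # the case distinction of (16)
    assert np.isclose(tls_slope(m_xx, m_yy, m_xy), F(z))
```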

Consider $z = \frac{m^*_{yy} - m^*_{xx}}{m^*_{xy}}$. From (37), (38) and (39) it follows for $g'(x) \neq 0$ and $m^*_{xy} \neq 0$ that $z = z^* + O_p\left(\frac{1}{\log n}\right)$, where

$z^* = -\frac{2}{g'(x)}$.

Denote

$\beta_{\lim} = -\frac{1 + \sqrt{1 + g'(x)^2}}{g'(x)}$.

It holds for $g'(x) < 0$ that $F_1(z^*) = \beta_{\lim}$ and for $g'(x) > 0$ that $F_2(z^*) = \beta_{\lim}$.

Suppose $g'(x) > 0$; then from (38) it follows that $\lim_{n\to\infty} P(m^*_{xy} > 0) = 0$. We have for $T > 0$

$P\left(\left|\hat\beta_{1,\mathrm{tls}} - \beta_{\lim}\right| > \frac{T}{\log n}\right) = P\left(\left|\hat\beta_{1,\mathrm{tls}} - \beta_{\lim}\right| > \frac{T}{\log n},\; m^*_{xy} < 0\right) + o(1)$.

Further,

$P\left(\left|F_2(z) - \beta_{\lim}\right| > \frac{T}{\log n}\right) = P\left(\left|F_2(z) - F_2(z^*)\right| > \frac{T}{\log n}\right) \le P\left(\left|z - z^*\right| > \frac{T}{\log n}\right)$,

since the first derivative of $F_2$ is bounded by 1; because $z = z^* + O_p\left(\frac{1}{\log n}\right)$, this is $o(1)$ for $T \to \infty$.

Assume $g'(x) < 0$; then $\lim_{n\to\infty} P(m^*_{xy} < 0) = 0$. For $T > 0$

$P\left(\left|\hat\beta_{1,\mathrm{tls}} - \beta_{\lim}\right| > \frac{T}{\log n}\right) = P\left(\left|F_1(z) - F_1(z^*)\right| > \frac{T}{\log n}\right) + o(1)$.

Hence

$\lim_{T\to\infty} \limsup_{n\to\infty} P\left(\left|\hat\beta_{1,\mathrm{tls}} - \beta_{\lim}\right| > \frac{T}{\log n}\right) = 0$,

which is statement 1.

2. Remember (14). Applying (35), (36) and that $\hat\beta_{1,\mathrm{tls}}$ is stochastically bounded, we obtain the statement.

4. Auxiliary results

In [2] it is shown that

$E_{x|\xi}\left[K^*\left(\frac{x - a}{h}\right)\right] = K\left(\frac{\xi - a}{h}\right)$.

Furthermore we have the following integral equations.

Lemma 1. Under $x = \xi + \delta$, $\delta \sim N(0, \sigma^2)$, and for kernels with (6)-(9), it holds for all $a$ that the expectation with respect to $\delta$ satisfies

$E_{x|\xi}\left[K^*\left(\frac{x - a}{h}\right)\frac{x - \xi}{h}\right] = \frac{\sigma^2}{h^2}\,K'\left(\frac{\xi - a}{h}\right)$,   (17)

$E_{x|\xi}\left[K^*\left(\frac{x - a}{h}\right)\left(\frac{x - \xi}{h}\right)^2\right] = \frac{\sigma^4}{h^4}\,K''\left(\frac{\xi - a}{h}\right) + \frac{\sigma^2}{h^2}\,K\left(\frac{\xi - a}{h}\right)$.   (18)

Proof. Note that

$K'(u) = -\frac{1}{2\pi}\int it\,\exp(-itu)\,\Phi_K(t)\,dt$.   (19)

Using $\exp\left(-it\frac{x-a}{h}\right) = \exp\left(-it\frac{\xi-a}{h}\right)\exp\left(-it\frac{x-\xi}{h}\right)$ and (12) we get

$E_{x|\xi}\left[K^*\left(\frac{x-a}{h}\right)\frac{x-\xi}{h}\right] = \frac{1}{2\pi}\int \exp\left(-it\frac{\xi-a}{h}\right)\frac{\Phi_K(t)}{\Phi_\delta\left(\frac{t}{h}\right)}\,I_2(t)\,dt$,

where $\Phi_\delta(t) = \exp\left(-\frac{\sigma^2 t^2}{2}\right)$ and

$I_2(t) = \frac{1}{h}\int u\,\exp\left(-i\frac{t}{h}u\right)\varphi_\delta(u)\,du$, with $\varphi_\delta(u) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{u^2}{2\sigma^2}\right)$.

Using the quadratic decomposition (completing the square) we get

$\exp\left(-i\frac{t}{h}u\right)\varphi_\delta(u) = \Phi_\delta\left(\frac{t}{h}\right)\,\varphi_{-\frac{it\sigma^2}{h},\,\sigma^2}(u)$,   (20)

where

$\varphi_{-\frac{it\sigma^2}{h},\,\sigma^2}(u) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{\left(u + \frac{it\sigma^2}{h}\right)^2}{2\sigma^2}\right)$.

Thus

$I_2(t) = \frac{1}{h}\,\Phi_\delta\left(\frac{t}{h}\right)\int u\,\varphi_{-\frac{it\sigma^2}{h},\,\sigma^2}(u)\,du = -\frac{it\sigma^2}{h^2}\,\Phi_\delta\left(\frac{t}{h}\right)$.

Summarizing and applying (19) we obtain

$E_{x|\xi}\left[K^*\left(\frac{x-a}{h}\right)\frac{x-\xi}{h}\right] = \frac{\sigma^2}{h^2}\left(-\frac{1}{2\pi}\right)\int it\,\exp\left(-it\frac{\xi-a}{h}\right)\Phi_K(t)\,dt = \frac{\sigma^2}{h^2}\,K'\left(\frac{\xi-a}{h}\right)$.   (21)

Note that

$K''(u) = -\frac{1}{2\pi}\int t^2\,\exp(-itu)\,\Phi_K(t)\,dt$.

In the same way as above we get

$E_{x|\xi}\left[K^*\left(\frac{x-a}{h}\right)\left(\frac{x-\xi}{h}\right)^2\right] = \frac{1}{2\pi}\int \exp\left(-it\frac{\xi-a}{h}\right)\frac{\Phi_K(t)}{\Phi_\delta\left(\frac{t}{h}\right)}\,I_3(t)\,dt$,

with

$I_3(t) = \frac{1}{h^2}\int u^2\,\exp\left(-i\frac{t}{h}u\right)\varphi_\delta(u)\,du$.

Applying (20) we obtain

$I_3(t) = \frac{1}{h^2}\,\Phi_\delta\left(\frac{t}{h}\right)\int u^2\,\varphi_{-\frac{it\sigma^2}{h},\,\sigma^2}(u)\,du = \frac{1}{h^2}\left(\sigma^2 - \frac{\sigma^4 t^2}{h^2}\right)\Phi_\delta\left(\frac{t}{h}\right)$.

Summarizing and using the representations of $K$ and $K''$ above, we get

$E_{x|\xi}\left[K^*\left(\frac{x-a}{h}\right)\left(\frac{x-\xi}{h}\right)^2\right] = \frac{1}{2\pi}\int \exp\left(-it\frac{\xi-a}{h}\right)\Phi_K(t)\,\frac{1}{h^2}\left(\sigma^2 - \frac{\sigma^4 t^2}{h^2}\right)dt = \frac{\sigma^4}{h^4}\,K''\left(\frac{\xi-a}{h}\right) + \frac{\sigma^2}{h^2}\,K\left(\frac{\xi-a}{h}\right)$.
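The unbiasedness property $E_{x|\xi}\left[K^*\left(\frac{x-a}{h}\right)\right] = K\left(\frac{\xi-a}{h}\right)$ quoted from [2] at the start of this section (and exploited throughout Lemma 1) can be checked by Monte Carlo. The sketch below is illustrative only; it reuses the assumed $\Phi_K$ from the earlier deconvolution-kernel sketch and picks arbitrary values for h, σ, ξ and a.

```python
import numpy as np

rng = np.random.default_rng(2)

T = np.linspace(-1.0, 1.0, 501)          # support of the assumed Phi_K
DT = T[1] - T[0]

def phi_K(t):
    """Assumed compactly supported Fourier transform of K ((1 - t^2)^3 on [-1, 1])."""
    return np.where(np.abs(t) <= 1.0, (1.0 - t ** 2) ** 3, 0.0)

def kernel_K(v):
    """Ordinary kernel K(v) obtained by Fourier inversion of Phi_K."""
    v = np.atleast_1d(np.asarray(v, dtype=float))
    return (np.cos(np.outer(v, T)) * phi_K(T)).sum(axis=1) * DT / (2.0 * np.pi)

def kernel_K_star(v, h, sigma):
    """Deconvolution kernel K*(v) from (12)."""
    v = np.atleast_1d(np.asarray(v, dtype=float))
    integrand = (np.cos(np.outer(v, T))
                 * np.exp(T ** 2 * sigma ** 2 / (2.0 * h ** 2))
                 * phi_K(T))
    return integrand.sum(axis=1) * DT / (2.0 * np.pi)

# Monte Carlo check of E_delta[ K*((xi + delta - a)/h) ] = K((xi - a)/h)
h, sigma, xi0, a = 0.3, 0.1, 0.45, 0.5   # illustrative values
delta = rng.normal(0.0, sigma, size=10_000)
lhs = kernel_K_star((xi0 + delta - a) / h, h, sigma).mean()
rhs = kernel_K((xi0 - a) / h)[0]
print(lhs, rhs)   # the two numbers agree up to Monte Carlo error
```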

Lemma 2. In the model (1)-(5), for kernels with (6)-(9), and for the bandwidth $h = \left(\frac{C}{\log n}\right)^{1/2}$ with $C$ sufficiently large, it holds that:

$\frac{1}{nh}\sum_{i=1}^n K^*\left(\frac{x_i - x}{h}\right) = 1 + O_P\left(\frac{1}{\log n}\right)$,   (22)

$\frac{1}{nh}\sum_{i=1}^n K^*\left(\frac{x_i - x}{h}\right)x_i = x + O_P\left(\frac{1}{\log n}\right)$,   (23)

$\frac{1}{nh}\sum_{i=1}^n K^*\left(\frac{x_i - x}{h}\right)x_i^2 = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{\xi_i - x}{h}\right)\xi_i^2 - \sigma^2 + O_P\left(\frac{1}{\log n}\right)$,   (24)

$\frac{1}{nh}\sum_{i=1}^n K^*\left(\frac{x_i - x}{h}\right)y_i = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{\xi_i - x}{h}\right)g(\xi_i) + O_P\left(\frac{1}{\log n}\right)$,   (25)

$\frac{1}{nh}\sum_{i=1}^n K^*\left(\frac{x_i - x}{h}\right)y_i^2 = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{\xi_i - x}{h}\right)\left(g(\xi_i)^2 + \sigma^2\right) + O_P\left(\frac{1}{\log n}\right)$,   (26)

$\frac{1}{nh}\sum_{i=1}^n K^*\left(\frac{x_i - x}{h}\right)x_i y_i = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{\xi_i - x}{h}\right)\xi_i\,g(\xi_i) - \sigma^2 g'(x) + O_P\left(\frac{1}{\log n}\right)$.   (27)

Proof. The proofs are based on sums of i.i.d. random variables of the form $S = \frac{1}{nh}\sum_{i=1}^n Z_i$. With $V = \mathrm{Var}(Z_i)$ we have

$S = ES + O_p\left(\left(\frac{V}{n h^2}\right)^{1/2}\right)$.   (28)

Here only the main steps are given; all details are shown in [5].

1. First we consider $S_1 = \frac{1}{nh}\sum_{i=1}^n Z_{1,i}$ with $Z_{1,i} = K^*\left(\frac{x_i - x}{h}\right)$. For the expectation with respect to $\delta_i$ we have

$E_{x_i|\xi_i} Z_{1,i} = E_{x_i|\xi_i}\left[K^*\left(\frac{x_i - x}{h}\right)\right] = K\left(\frac{\xi_i - x}{h}\right)$.

Further,

$E\left[K^*\left(\frac{x_i - x}{h}\right)^2\right] \le h\,V_{K^*}$ with $V_{K^*} = \int K^*(u)^2\,du$.

From the Parseval equality and (12) it follows that

$V_{K^*} = \frac{1}{2\pi}\int \left|\Phi_{K^*}(t)\right|^2\,dt = \frac{1}{2\pi}\int \exp\left(\frac{\sigma^2 t^2}{h^2}\right)\Phi_K(t)^2\,dt$.

For kernels $K$ with compactly supported Fourier transform (7) we get

$V_{K^*} \le \max_{t \in [a,b]} \exp\left(\frac{\sigma^2 t^2}{h^2}\right)\int \Phi_K(t)^2\,dt \le \mathrm{const}\cdot\exp\left(\frac{c_0}{h^2}\right)$.

Thus, for $h^2 = \frac{C}{\log n}$ with $C$ sufficiently large,

$\mathrm{Var}(S_1) \le \frac{1}{nh}\,V_{K^*} \le \mathrm{const}\cdot\frac{1}{nh}\exp\left(\frac{c_0}{h^2}\right) < \mathrm{const}\cdot\frac{1}{(\log n)^2}$.   (29)

The expectation of $S_1$ with respect to all $\delta_i$ is the ordinary density estimator of $p_\xi$:

$E_\delta S_1 = \hat p_\xi(x)$ with $\hat p_\xi(x) = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{\xi_i - x}{h}\right)$.

Using the model assumption (4) we get (for more details see the report [5])

$\hat p_\xi(x) = 1 + O_P\left(\frac{1}{\log n}\right)$.   (30)

2. Here $S_2 = \frac{1}{nh}\sum_{i=1}^n Z_{2,i}$ with $Z_{2,i} = K^*\left(\frac{x_i - x}{h}\right)x_i$. From (17) it follows that

$E_{x_i|\xi_i} Z_{2,i} = \xi_i\,K\left(\frac{\xi_i - x}{h}\right) + \frac{\sigma^2}{h}\,K'\left(\frac{\xi_i - x}{h}\right)$.   (31)

Further, $E Z_{2,i}^2 = E\left[K^*\left(\frac{x_i - x}{h}\right)^2 x_i^2\right] \le \mathrm{const}\cdot h\,V_{K^*}$. Thus, using a similar argumentation as in (29), we get that $\mathrm{Var}(S_2) < \mathrm{const}\cdot\frac{1}{(\log n)^2}$.

The conditional expectation of $S_2$ can be interpreted as the sum of $\sigma^2$ times an ordinary estimator of $p_\xi'$ and of an ordinary kernel estimator

$\hat f(x) = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{\xi_i - x}{h}\right)\xi_i$   (32)

of $f(x) = x\,p_\xi(x) = x$. Using the model assumption (4) we get (for more details see the report [5]), for $h^2 = \frac{C}{\log n}$,

$E_\delta S_2 = x + O_P\left(\frac{1}{\log n}\right)$.   (33)

3. Now we have a sum of independent random variables $S_3 = \frac{1}{nh}\sum_{i=1}^n Z_{3,i}$ with $Z_{3,i} = K^*\left(\frac{x_i - x}{h}\right)x_i^2$. Applying (17), (18) and (31) we obtain

$E_{x_i|\xi_i} Z_{3,i} = K\left(\frac{\xi_i - x}{h}\right)\left(\sigma^2 + \xi_i^2\right) + \frac{2\sigma^2 \xi_i}{h}\,K'\left(\frac{\xi_i - x}{h}\right) + \frac{\sigma^4}{h^2}\,K''\left(\frac{\xi_i - x}{h}\right)$.

Further we apply a similar argumentation as above and use

$\hat p\,'_\xi(x) = 0 + O_P\left(\frac{1}{\log n}\right)$ and $\frac{\sigma^2}{n h^2}\sum_{i=1}^n K'\left(\frac{\xi_i - x}{h}\right)\xi_i = -\sigma^2 + O_P\left(\frac{1}{\log n}\right)$.

We estimate $\mathrm{Var}(S_3) < \mathrm{const}\cdot\frac{1}{(\log n)^2}$.

4. Can be shown similarly, see [5], with $Z_{4,i} = K^*\left(\frac{x_i - x}{h}\right)y_i$. We use $E_{x_i|\xi_i} Z_{4,i} = K\left(\frac{\xi_i - x}{h}\right)g(\xi_i)$ in a similar way as above.

5. Can be shown similarly, see [5], with $Z_{5,i} = K^*\left(\frac{x_i - x}{h}\right)y_i^2$. We use $E_{x_i|\xi_i} Z_{5,i} = K\left(\frac{\xi_i - x}{h}\right)\left(g(\xi_i)^2 + \sigma^2\right)$.

6. Can be shown similarly, see [5], with $Z_{6,i} = K^*\left(\frac{x_i - x}{h}\right)y_i x_i$. We use

$E_{x_i|\xi_i} Z_{6,i} = E_{x_i|\xi_i}\left[K^*\left(\frac{x_i - x}{h}\right)x_i\right]g(\xi_i) = \left(\xi_i\,K\left(\frac{\xi_i - x}{h}\right) + \frac{\sigma^2}{h}\,K'\left(\frac{\xi_i - x}{h}\right)\right)g(\xi_i)$

and that

$\frac{\sigma^2}{n h^2}\sum_{i=1}^n K'\left(\frac{\xi_i - x}{h}\right)g(\xi_i) = -\sigma^2 g'(x) + O_P\left(\frac{1}{\log n}\right)$.

Now we summarize the results for means and quadratic forms with the weights (10) and (11).

Lemma 3. In the model (1)-(5), for kernels with (6)-(9), and for the bandwidth $h = \left(\frac{C}{\log n}\right)^{1/2}$ with $C$ sufficiently large, it holds that:

$m_{\xi\xi} = O_P\left(\frac{1}{\log n}\right)$,   (34)

$\bar x_w^* = x + O_P\left(\frac{1}{\log n}\right)$,   (35)

$\bar y_w^* = g(x) + O_P\left(\frac{1}{\log n}\right)$,   (36)

$m_{xx}^* = -\sigma^2 + O_P\left(\frac{1}{\log n}\right)$,   (37)

$m_{xy}^* = -\sigma^2\,g'(x) + O_P\left(\frac{1}{\log n}\right)$,   (38)

$m_{yy}^* = \sigma^2 + O_P\left(\frac{1}{\log n}\right)$.   (39)

Proof. 1. We decompose

$m_{\xi\xi} = \sum_{i=1}^n w_i(x)(\xi_i - x)^2 - (\bar\xi_w - x)^2$, thus $m_{\xi\xi} \le \sum_{i=1}^n w_i(x)(\xi_i - x)^2$.

Consider $S = \frac{1}{nh}\sum_{i=1}^n Z_i$ with $Z_i = K\left(\frac{\xi_i - x}{h}\right)\left(\frac{\xi_i - x}{h}\right)^2$. From (6) and (9) the expectation $E Z_i \le h\,\mu_K$ and the variance term are bounded. Remembering (30) we get

$\sum_{i=1}^n w_i(x)(\xi_i - x)^2 = h^2\,\frac{S}{\hat p_\xi(x)} = O_p(h^2)$,

and (34) follows from $h^2 = \frac{C}{\log n}$.

2. Applying (22) and (23) we get

$\bar x_w^* = x + O_p\left(\frac{1}{\log n}\right)$,

which is (35).

3. Analogously we estimate $\bar y_w^*$ by using (22), (25) and (30):

$\bar y_w^* = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{\xi_i - x}{h}\right)g(\xi_i) + O_p\left(\frac{1}{\log n}\right) = \bar g_w\,\hat p_\xi(x) + O_p\left(\frac{1}{\log n}\right) = \bar g_w + O_p\left(\frac{1}{\log n}\right)$.

Furthermore it holds that $\bar g_w = g(x) + O_p\left(\frac{1}{\log n}\right)$; compare for instance Theorem 4 in [5]. Thus $\bar y_w^* = g(x) + O_p\left(\frac{1}{\log n}\right)$.

4. Use the decomposition $m_{xx}^* = \sum_{i=1}^n w_i^*(x)\,x_i^2 - (\bar x_w^*)^2$. Because of (22) and (24) it holds that

$\sum_{i=1}^n w_i^*(x)\,x_i^2 = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{\xi_i - x}{h}\right)\xi_i^2 - \sigma^2 + O_p\left(\frac{1}{\log n}\right)$.

Applying (30) we obtain

$\sum_{i=1}^n w_i^*(x)\,x_i^2 = \sum_{i=1}^n w_i(x)\,\xi_i^2 - \sigma^2 + O_p\left(\frac{1}{\log n}\right)$.

Thus $m_{xx}^* = m_{\xi\xi} - \sigma^2 + O_p\left(\frac{1}{\log n}\right)$, and the statement (37) follows from (34).

5. Use the decomposition $m_{xy}^* = \sum_{i=1}^n w_i^*(x)\,x_i y_i - \bar y_w^*\,\bar x_w^*$. From (22), (27) and (30) we get

$\sum_{i=1}^n w_i^*(x)\,x_i y_i = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{\xi_i - x}{h}\right)\xi_i\,g(\xi_i) - \sigma^2 g'(x) + O_p\left(\frac{1}{\log n}\right) = \sum_{i=1}^n w_i(x)\,\xi_i\,g(\xi_i) - \sigma^2 g'(x) + O_p\left(\frac{1}{\log n}\right)$.

Under (5) we can show (compare Theorem 4 in [5]) that

$m_{\xi g} = g'(x)\,m_{\xi\xi} + O_p\left(\frac{1}{\log n}\right)$.

Then (38) follows from (34), (35) and (36).

6. We consider now $m_{yy}^* = \sum_{i=1}^n w_i^*(x)\,y_i^2 - (\bar y_w^*)^2$. From (22) and (26) we get

$\sum_{i=1}^n w_i^*(x)\,y_i^2 = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{\xi_i - x}{h}\right)\left(g(\xi_i)^2 + \sigma^2\right) + O_p\left(\frac{1}{\log n}\right)$.

Thus by (30)

$\sum_{i=1}^n w_i^*(x)\,y_i^2 = \sum_{i=1}^n w_i(x)\,g(\xi_i)^2 + \sigma^2 + O_p\left(\frac{1}{\log n}\right)$.

Under (5) we can show (compare Theorem 4 in [5]) that $m_{gg} = g'(x)^2\,m_{\xi\xi} + O_p\left(\frac{1}{\log n}\right)$. Then (39) follows from (34), (35) and (36).

Bibliography

1. J. Fan and I. Gijbels (1996), Local Polynomial Modelling and Its Applications, Chapman and Hall, London.
2. J. Fan and Y. Truong (1993), Nonparametric regression with errors in variables, Ann. Statist.
3. W. A. Fuller (1987), Measurement Error Models, Wiley, New York.
4. J. Staudenmayer and D. Ruppert (2004), Local polynomial regression and simulation extrapolation, J. R. Statist. Soc. B 66, Part 1.
5. S. Zwanzig (2006), On local linear estimation in nonparametric errors-in-variables models, UUDM Report, Department of Mathematics, Uppsala University, Box 480, SE-75106 Uppsala, Sweden.

E-mail: zwanzig@math.uu.se
