Bootstrap confidence intervals in nonparametric regression without an additive model


Dimitris N. Politis
University of California at San Diego, La Jolla, CA, USA, dpolitis@ucsd.edu

Abstract. The problem of confidence interval construction in nonparametric regression via the bootstrap is revisited. When an additive model holds true, the usual residual bootstrap is available, but it often leads to confidence interval under-coverage; the case is made that this under-coverage can be partially corrected by using predictive, as opposed to fitted, residuals for resampling. Furthermore, it has been unclear to date whether a bootstrap approach is feasible in the absence of an additive model. The main thrust of this paper is to show how the transformation approach put forth by Politis (2010, 2013) in the related setting of prediction intervals can be used to construct bootstrap confidence intervals without an additive model.

1 Introduction

Consider regression data of the type $\{(Y_t, x_t),\ t = 1, \ldots, n\}$. For simplicity of presentation, the regressor $x_t$ is assumed univariate and deterministic; the case of a multivariate regressor is handled similarly. As usual, it will be assumed that $Y_1, \ldots, Y_n$ are independent but not identically distributed. Attention focuses primarily on the first two moments of the response $Y_t$, namely

$$\mu(x_t) = E(Y_t) \quad \text{and} \quad \sigma^2(x_t) = \mathrm{Var}(Y_t). \tag{1}$$

In the nonparametric setting, the functions $\mu(\cdot)$ and $\sigma(\cdot)$ are considered unknown but are assumed to possess some degree of smoothness (differentiability, etc.). There are many approaches to nonparametric estimation of the functions $\mu$ and $\sigma$, e.g., wavelets and orthogonal series, smoothing splines, local polynomials, and kernel smoothers. For concreteness, this paper will focus on one of the oldest methods, namely the Nadaraya-Watson (N-W) kernel estimators; see Li and Racine (2007) and the references therein.

Beyond point estimates of the functions $\mu$ and $\sigma$, it is important to be able to additionally provide interval estimates in order to have a measure of their statistical accuracy. Suppose, for example, that a practitioner is interested in the expected response to be observed at a future point $x_f$. A confidence interval for $\mu(x_f)$ is then desirable. Under regularity conditions, such a confidence interval can be given either via a large-sample normal approximation or via a resampling approach; see, e.g., Freedman (1981), Härdle and Bowman (1988), Härdle and Marron (1991), Hall (1993), or Neumann and Polzehl (1998).

Typical regularity conditions for the above bootstrap approaches involve the assumption of an additive model with respect to independent and identically distributed (i.i.d.) errors. In Section 2, we revisit the usual model-based bootstrap for regression, adding the dimension of employing predictive as opposed to fitted residuals, as advocated by Politis (2010, 2013) in a related context. More importantly, in Section 3 we address the problem of constructing a bootstrap confidence interval for $\mu(x_f)$ without an underlying additive model. The model-free approach developed in this paper is totally automatic, relieving the practitioner from the need to find an optimal transformation towards additivity and variance stabilization; this is a significant practical advantage because of the multitude of such proposed transformations, e.g., the Box/Cox power family, ACE, AVAS, etc.; see Linton et al. (1997) and the references therein. The finite-sample simulations provided in Section 4 confirm the viability and good performance of the model-free confidence intervals.

2 Model-based nonparametric regression

2.1 Nonparametric regression with an additive model

An additive model for nonparametric regression is given by the equation

$$Y_t = \mu(x_t) + \sigma(x_t)\,\varepsilon_t, \quad t = 1, \ldots, n, \tag{2}$$

with $\varepsilon_t$ i.i.d. $(0,1)$ from an (unknown) distribution $F$. The N-W estimator of $\mu(x)$ is defined as

$$m_x = \sum_{i=1}^n Y_i\, \tilde K\Big(\frac{x - x_i}{h}\Big) \quad \text{with} \quad \tilde K\Big(\frac{x - x_i}{h}\Big) = \frac{K\big(\frac{x - x_i}{h}\big)}{\sum_{k=1}^n K\big(\frac{x - x_k}{h}\big)} \tag{3}$$

where $h$ is the bandwidth, and $K(x)$ is a symmetric kernel function with $\int K(x)\,dx = 1$. Similarly, the N-W estimator of $\sigma^2(x)$ is given by $s_x^2 = M_x - m_x^2$ where $M_x = \sum_{i=1}^n Y_i^2\, \tilde K\big(\frac{x - x_i}{h}\big)$.

For $t = 1, \ldots, n$, let $e_t = (Y_t - m_{x_t})/s_{x_t}$ denote the fitted residuals, and $\tilde e_t = (Y_t - m_{x_t}^{(t)})/s_{x_t}^{(t)}$ the predictive residuals. Here, $m_x^{(t)}$ and $M_x^{(t)}$ denote the estimators $m_x$ and $M_x$ respectively computed from the delete-$Y_t$ dataset: $\{(Y_i, x_i),\ i = 1, \ldots, t-1 \text{ and } i = t+1, \ldots, n\}$. As before, define $s_{x_t}^{(t)} = \sqrt{M_{x_t}^{(t)} - (m_{x_t}^{(t)})^2}$. Choosing the bandwidth $h$ is often done by cross-validation, i.e., picking $h$ to minimize $\sum_{t=1}^n \tilde e_t^2$, or its $L_1$ analog $\sum_{t=1}^n |\tilde e_t|$.
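To make the above concrete, here is a minimal Python sketch of the N-W estimators, the fitted and predictive residuals, and the $L_1$ cross-validation criterion. The Gaussian kernel, the small positive floor protecting $s_x^2$ from numerical negativity, and the grid search over bandwidths are illustrative implementation choices, not prescriptions of the paper.

```python
import numpy as np

def nw_weights(x, xs, h):
    """Normalized kernel weights K-tilde((x - x_i)/h) of eq. (3); Gaussian K."""
    k = np.exp(-0.5 * ((x - xs) / h) ** 2)
    return k / k.sum()

def nw_fit(x, xs, ys, h):
    """N-W estimates (m_x, s_x) of the conditional mean and std at point x."""
    w = nw_weights(x, xs, h)
    m = w @ ys                                   # m_x of eq. (3)
    M = w @ ys ** 2                              # M_x
    return m, np.sqrt(max(M - m ** 2, 1e-12))    # floor: s_x^2 may dip below 0

def residuals(xs, ys, h):
    """Fitted residuals e_t and predictive (delete-one) residuals e~_t."""
    n = len(ys)
    e, e_tilde = np.empty(n), np.empty(n)
    for t in range(n):
        m, s = nw_fit(xs[t], xs, ys, h)
        e[t] = (ys[t] - m) / s
        keep = np.arange(n) != t                 # delete-Y_t dataset
        m_d, s_d = nw_fit(xs[t], xs[keep], ys[keep], h)
        e_tilde[t] = (ys[t] - m_d) / s_d
    return e, e_tilde

def cv_bandwidth(xs, ys, grid):
    """L1 cross-validation: pick h minimizing sum_t |e~_t| over a given grid."""
    return min(grid, key=lambda h: np.abs(residuals(xs, ys, h)[1]).sum())
```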

2.2 Model-based confidence intervals

Consider the problem of constructing a confidence interval for the regression function $\mu(x_f)$ at a point of interest $x_f$. A normal approximation to the distribution of the estimator $m_{x_f}$ implies an approximate $(1-\alpha)100\%$ equal-tailed confidence interval for $\mu(x_f)$ given by:

$$[\,m_{x_f} + v_{x_f}\, z(\alpha/2),\ \ m_{x_f} + v_{x_f}\, z(1-\alpha/2)\,] \tag{4}$$

where $v_{x_f}^2 = s_{x_f}^2 \sum_{i=1}^n \tilde K^2\big(\frac{x_f - x_i}{h}\big)$ with $\tilde K$ defined in eq. (3), and $z(\alpha)$ being the $\alpha$ quantile of the standard normal. If the density (e.g., histogram) of the design points $x_1, \ldots, x_n$ can be thought to approximate a given functional shape (say, $f(\cdot)$) for large $n$, then the large-sample approximation

$$\sum_{i=1}^n \tilde K^2\Big(\frac{x_f - x_i}{h}\Big) \approx \frac{\int K^2(x)\,dx}{n\,h\,f(x_f)} \tag{5}$$

can be used, which relies on the assumption that $\int K(x)\,dx = 1$; see, e.g., Li and Racine (2007).

Interval (4) may be problematic in two respects: (a) it ignores the bias of $m_x$, so it must either be explicitly bias-corrected, or a suboptimal bandwidth must be used to ensure undersmoothing; and (b) it is based on a Central Limit Theorem which may not be a good finite-sample approximation if the errors are skewed and/or leptokurtic, or when the sample size is not large enough. For both of the above reasons, practitioners often prefer bootstrap methods over the normal approximation interval (4).

When using fitted residuals, the following algorithm is the well-known residual bootstrap pioneered by Freedman (1981) in a linear regression setting, and extended to nonparametric regression by Härdle and Bowman (1988), among other authors. As an alternative, we also propose the use of predictive residuals for resampling, as advocated by Politis (2010, 2013) in a related context. The predictive residuals have an empirical distribution whose shape is similar to that of the fitted residuals but whose scale is larger. This is a finite-sample phenomenon only, but it may help alleviate the well-known under-coverage of bootstrap confidence intervals. Our goal is to approximate the distribution of the confidence root $\mu(x_f) - m_{x_f}$ by that of its bootstrap counterpart.

RESAMPLING ALGORITHM FOR MODEL-BASED CONFIDENCE INTERVALS FOR $\mu(x_f)$

1. Based on the data $\{(Y_t, x_t),\ t = 1, \ldots, n\}$, construct the estimates $m_x$ and $s_x$, from which the fitted residuals $e_i$ and predictive residuals $\tilde e_i$ are computed for $i = 1, \ldots, n$.
2. For the traditional model-based bootstrap approach (MB), let $r_i = e_i - n^{-1}\sum_j e_j$, for $i = 1, \ldots, n$. For the predictive residual approach (PRMB) as in Politis (2010), let $r_i = \tilde e_i - n^{-1}\sum_j \tilde e_j$, for $i = 1, \ldots, n$.
   a. Sample randomly (with replacement) the residuals $r_1, \ldots, r_n$ to create the bootstrap pseudo-residuals $r_1^*, \ldots, r_n^*$ whose empirical distribution is denoted by $\hat F_n^*$.
   b. Create pseudo-data in the $Y$ domain by letting $Y_i^* = m_{x_i} + s_{x_i} r_i^*$, for $i = 1, \ldots, n$.
   c. Based on the pseudo-data $\{(Y_t^*, x_t),\ t = 1, \ldots, n\}$, re-estimate the functions $\mu(x)$ and $\sigma(x)$ by the kernel estimators $m_x^*$ and $s_x^*$ (with the same kernel and bandwidths as the original estimators $m_x$ and $s_x$).
   d. Calculate a replicate of the bootstrap confidence root: $m_{x_f} - m_{x_f}^*$.
3. Steps (a)-(d) above are repeated $B$ times, and the $B$ bootstrap root replicates are collected in the form of an empirical distribution whose $\alpha$ quantile is denoted by $q(\alpha)$.
4. Then, a $(1-\alpha)100\%$ equal-tailed confidence interval for $\mu(x_f)$ is given by:

$$[\,m_{x_f} + q(\alpha/2),\ \ m_{x_f} + q(1-\alpha/2)\,]. \tag{6}$$
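The algorithm above translates almost line-for-line into code. The following sketch reuses nw_fit and residuals from the earlier snippet; the default generator seed and the scalar return format are illustrative assumptions.

```python
import numpy as np

def mb_bootstrap_ci(xs, ys, x_f, h, alpha=0.10, B=1000, predictive=False,
                    rng=None):
    """MB (fitted residuals) or PRMB (predictive residuals) interval (6)."""
    rng = rng or np.random.default_rng(0)
    e, e_tilde = residuals(xs, ys, h)
    r = (e_tilde if predictive else e)
    r = r - r.mean()                                        # centered r_i
    fits = np.array([nw_fit(x, xs, ys, h) for x in xs])     # (m_{x_i}, s_{x_i})
    m_f, _ = nw_fit(x_f, xs, ys, h)
    roots = np.empty(B)
    for j in range(B):
        r_star = rng.choice(r, size=len(ys), replace=True)  # step (a)
        y_star = fits[:, 0] + fits[:, 1] * r_star           # step (b)
        m_f_star, _ = nw_fit(x_f, xs, y_star, h)            # step (c)
        roots[j] = m_f - m_f_star                           # step (d)
    q_lo, q_hi = np.quantile(roots, [alpha / 2, 1 - alpha / 2])
    return m_f + q_lo, m_f + q_hi                           # interval (6)
```

For instance, mb_bootstrap_ci(xs, ys, x_f=np.pi/2, h=cv_bandwidth(xs, ys, np.linspace(0.1, 1.0, 10)), predictive=True) would return the PRMB interval at the peak of a sinusoidal regression function.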

Remark 2.1 As in all nonparametric smoothing problems, choosing the bandwidth $h$ is often a key issue due to the ever-looming problem of bias; the addition of a bootstrap algorithm as above further complicates things. Different authors have used various tricks to account for the bias. For example, Härdle and Bowman (1988) construct a kernel estimate of the second derivative $\mu''(x)$, and use this estimate to explicitly correct for the bias; the estimate of the second derivative is known to be consistent, but it is difficult to choose its bandwidth. Härdle and Marron (1991) estimate the (fitted) residuals using the optimal bandwidth, but the resampled residuals are then added to an oversmoothed estimate of $\mu$; the bootstrapped data are then smoothed using the optimal bandwidth. Neumann and Polzehl (1998) use only one bandwidth, but it is of smaller order than the mean square error optimal rate; this undersmoothing of curve estimates was first proposed by Hall (1993) and is perhaps the easiest theoretical solution towards confidence band construction, although the recommended degree of undersmoothing for practical purposes is not obvious.

Remark 2.2 An important feature of all bootstrap procedures is that they can handle joint confidence intervals, i.e., confidence regions, with the same ease as the univariate ones. This is especially true in regression, where simultaneous confidence intervals are typically constructed in the form of confidence bands; the details are well known in the literature and are omitted due to lack of space.

3 Model-free nonparametric regression

3.1 Nonparametric regression without an additive model

We now revisit the nonparametric regression setup, but in a situation where a model such as eq. (2) cannot be considered to hold true (not even approximately). As an example of model (2) not being valid, consider the setup where the skewness and/or kurtosis of $Y_t$ depends on $x_t$, in which case centering and studentization will not result in i.i.d. errors. The dataset is still $\{(Y_t, x_t),\ t = 1, \ldots, n\}$, where the regressor $x_t$ is univariate and deterministic, and the variables $Y_1, Y_2, \ldots$ are independent but not identically distributed.

Define the conditional distribution $D_x(y) = P\{Y_f \le y \mid x_f = x\}$, where $(Y_f, x_f)$ represents the random response $Y_f$ associated with regressor $x_f$. Attention still focuses on constructing an interval estimate of $\mu(x_f) = E(Y_f) = \int y\, D_{x_f}(dy)$. Throughout this section, we will assume that the function $D_x(y)$ is continuous in both $x$ and $y$. Consequently, we can estimate $D_x(y)$ by the local (weighted) empirical distribution

$$\hat D_x(y) = \sum_{i=1}^n \mathbf{1}\{Y_i \le y\}\, \tilde K\Big(\frac{x - x_i}{h}\Big); \tag{7}$$

this is just a N-W smoother of the variables $\mathbf{1}\{Y_t \le y\}$, $t = 1, \ldots, n$. The estimator $\hat D_x(y)$ enjoys many desirable properties, including asymptotic consistency, but it is discontinuous as a function of $y$. To construct a continuous (and differentiable) estimator, let $b$ be a positive bandwidth parameter and $\Lambda(y)$ a (differentiable) distribution function that is strictly increasing, and define

$$\tilde D_x(y) = \sum_{i=1}^n \Lambda\Big(\frac{y - Y_i}{b}\Big)\, \tilde K\Big(\frac{x - x_i}{h}\Big). \tag{8}$$

Under regularity conditions, Li and Racine (2007, Theorem 6.2) show that

$$\mathrm{Var}(\tilde D_x(y)) = O\Big(\frac{1}{nh}\Big) \quad \text{and} \quad \mathrm{Bias}(\tilde D_x(y)) = O(h^2 + b^2) \tag{9}$$

assuming that $h \to 0$, $b \to 0$, $nh \to \infty$, and $nh(h^3 + b^3) = o(1)$; to minimize the asymptotic Mean Squared Error of $\tilde D_x(y)$, the optimal bandwidths are $h \approx c_h n^{-1/5}$ and $b \approx c_b n^{-2/5}$ for some positive constants $c_h, c_b$.
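A minimal sketch of the two estimators follows, under the illustrative choice of $\Lambda$ equal to the standard normal distribution function (the same choice is made in the simulations of Section 4); nw_weights is the helper defined in Section 2.

```python
import numpy as np
from scipy.stats import norm

def D_hat(x, y, xs, ys, h):
    """Local weighted empirical CDF of eq. (7); a step function in y."""
    return nw_weights(x, xs, h) @ (ys <= y)

def D_tilde(x, y, xs, ys, h, b):
    """Smoothed estimator of eq. (8), taking Lambda = standard normal CDF."""
    return nw_weights(x, xs, h) @ norm.cdf((y - ys) / b)
```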

Recall that the $Y_t$'s are non-i.i.d. only because they do not have identical distributions. Since they are continuous random variables, the probability integral transform is applicable. If we let $\eta_i = D_{x_i}(Y_i)$ for $i = 1, \ldots, n$, then $\eta_1, \ldots, \eta_n$ are i.i.d. Uniform$(0,1)$. Of course, $D_x(\cdot)$ is not known, but we can define

$$u_i = \tilde D_{x_i}(Y_i) \quad \text{for } i = 1, \ldots, n; \tag{10}$$

by the consistency of $\tilde D_x(\cdot)$, we can now claim that $u_1, \ldots, u_n$ are approximately i.i.d. Uniform$(0,1)$.

Using eq. (10) and following the Model-free Prediction Principle of Politis (2010), the quantity

$$\Pi_{x_f} = n^{-1} \sum_{i=1}^n \hat D_{x_f}^{-1}(u_i) \tag{11}$$

was proposed as an $L_2$-optimal predictor of $Y_f$, i.e., an approximation to the conditional expectation $\mu(x_f) = E(Y_f)$. Note that $\hat D_{x_f}(y)$ is a step function in $y$, and thus not invertible; the notation $\hat D_{x_f}^{-1}$ denotes the quantile inverse. Alternatively, one could propose the quantity $n^{-1}\sum_{i=1}^n \tilde D_{x_f}^{-1}(u_i)$ where a true inverse is used; the difference between the two is negligible, and definition (11) is more straightforward.

Note that $\Pi_{x_f}$ is defined as a function of the approximately i.i.d. variables $u_1, \ldots, u_n$; as such, it may be amenable to the original i.i.d. bootstrap of Efron (1979). Two questions arise: (a) is the estimator $\Pi_{x_f}$ quite different from the standard N-W estimator $m_{x_f}$? and (b) could $m_{x_f}$ itself be bootstrapped using i.i.d. resampling? The answers to these questions are NO and YES respectively, due to the following fact. To motivate it, recall that the N-W estimator $m_x$ can be expressed alternatively as

$$m_x = \sum_{i=1}^n Y_i\, \tilde K\Big(\frac{x - x_i}{h}\Big) = \int y\, \hat D_x(dy) = \int_0^1 \hat D_x^{-1}(u)\, du. \tag{12}$$

The last equality in (12) is the identity $\int y\, F(dy) = \int_0^1 F^{-1}(u)\, du$ that holds true for any distribution $F$.

Fact 3.1 Assume that $D_x(y)$ is continuous in $x$, and differentiable in $y$ with a derivative that is everywhere positive on its support. Then, $\Pi_{x_f}$ and $m_{x_f}$ are asymptotically equivalent, i.e., $\sqrt{nh}\,(\Pi_{x_f} - m_{x_f}) = o_p(1)$ for any $x_f$ that is not a boundary point.

One way to prove the above is to show that the average appearing in (11) is close to a Riemann sum approximation to the integral on the right-hand side of (12) based on a grid of $n$ points. The law of the iterated logarithm for order statistics of uniform spacings can be useful here; see Devroye (1981) and the references therein.

Remark 3.1 The above line of argument indicates that there is a variety of estimators that are asymptotically equivalent to $m_{x_f}$ in the sense of Fact 3.1. For example, the Riemann sum $M^{-1}\sum_{k=1}^M \hat D_{x_f}^{-1}(k/M)$ is such an approximation as long as $M \gg n$. A stochastic approximation can also be concocted as $M^{-1}\sum_{i=1}^M \hat D_{x_f}^{-1}(W_i)$, where $W_1, \ldots, W_M$ are i.i.d. generated from a Uniform$(0,1)$ distribution and $M \gg n$.
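The following sketch computes the transformed data of eq. (10), the quantile inverse of $\hat D_x$, and the predictor of eq. (11); clipping the searchsorted index is a numerical safeguard against floating-point round-off, not part of the definition.

```python
import numpy as np

def u_transform(xs, ys, h, b):
    """Transformed data u_i = D~_{x_i}(Y_i) of eq. (10)."""
    return np.array([D_tilde(xs[i], ys[i], xs, ys, h, b)
                     for i in range(len(ys))])

def D_hat_inv(x, u, xs, ys, h):
    """Quantile inverse of D-hat: the smallest y with D-hat_x(y) >= u."""
    order = np.argsort(ys)
    cdf = np.cumsum(nw_weights(x, xs, h)[order])   # D-hat at the sorted Y's
    return ys[order][min(np.searchsorted(cdf, u), len(ys) - 1)]

def Pi_predictor(x_f, xs, ys, h, b):
    """The L2-optimal predictor Pi_{x_f} of eq. (11)."""
    u = u_transform(xs, ys, h, b)
    return np.mean([D_hat_inv(x_f, ui, xs, ys, h) for ui in u])
```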

3.2 Bootstrap algorithm for model-free confidence intervals

Let $\hat\mu(x_f)$ denote our chosen estimator of $\mu(x_f) = E(Y_f)$, i.e., either $m_{x_f}$ or $\Pi_{x_f}$, or even one of the other asymptotically equivalent estimators discussed in Remark 3.1. Our goal is to approximate the distribution of the confidence root $\mu(x_f) - \hat\mu(x_f)$ by that of its bootstrap counterpart. The algorithm reads as follows.

RESAMPLING ALGORITHM FOR MODEL-FREE CONFIDENCE INTERVALS FOR $\mu(x_f)$

1. Based on the data $\{(Y_t, x_t),\ t = 1, \ldots, n\}$, construct the estimates $\hat D_x(\cdot)$ and $\tilde D_x(\cdot)$, and use eq. (10) to obtain the transformed data $u_1, \ldots, u_n$ that are approximately i.i.d. Uniform$(0,1)$.
   a. Sample randomly (with replacement) the transformed data $u_1, \ldots, u_n$ to create bootstrap pseudo-data $u_1^*, \ldots, u_n^*$.
   b. Use the quantile inverse transformation $\hat D_{x_t}^{-1}$ to create bootstrap pseudo-data in the $Y$ domain, i.e., let $Y_t^* = \hat D_{x_t}^{-1}(u_t^*)$ for $t = 1, \ldots, n$. Note that $Y_t^*$ is paired with the original design point $x_t$; hence, the bootstrap dataset is $\{(Y_t^*, x_t),\ t = 1, \ldots, n\}$.
   c. Based on the pseudo-data $\{(Y_t^*, x_t),\ t = 1, \ldots, n\}$, re-estimate the conditional distribution $D_x(\cdot)$; denote the bootstrap estimates by $\hat D_x^*(\cdot)$ and $\tilde D_x^*(\cdot)$.
   d. Calculate a replicate of the bootstrap confidence root: $\hat\mu(x_f) - \hat\mu^*(x_f)$, where $\hat\mu^*(x_f)$ equals either $\int y\, \hat D_{x_f}^*(dy) = \int_0^1 \hat D_{x_f}^{*-1}(u)\, du$ or $n^{-1}\sum_{i=1}^n \hat D_{x_f}^{*-1}(u_i^*)$, according to whether $\hat\mu(x_f)$ was chosen as $m_{x_f}$ or $\Pi_{x_f}$.
2. Steps (a)-(d) above are repeated $B$ times, and the $B$ bootstrap root replicates are collected in the form of an empirical distribution whose $\alpha$ quantile is denoted by $q(\alpha)$.
3. Then, the Model-Free (MF) $(1-\alpha)100\%$ equal-tailed confidence interval for $\mu(x_f)$ is

$$[\,\hat\mu(x_f) + q(\alpha/2),\ \ \hat\mu(x_f) + q(1-\alpha/2)\,]. \tag{13}$$

Remark 3.2 An alternative way to implement step 1(a) of the above algorithm is:

   a. Generate bootstrap pseudo-data $u_1^*, \ldots, u_n^*$ i.i.d. from an exact Uniform$(0,1)$ distribution.

If this choice is made, then there is no need to use eq. (10) to obtain the transformed data $u_1, \ldots, u_n$; in this sense, the smooth estimator $\tilde D_x(\cdot)$ is not needed, and the step function $\hat D_x(\cdot)$ suffices for the algorithm. The downside to this proposal is that the option to use predictive $u$ data is unavailable. To elaborate, recall that Politis (2010) defined the model-free predictive $u$ data as follows. Let $\tilde D_{x_t}^{(t)}$ denote the estimator $\tilde D_{x_t}$ as computed from the delete-$Y_t$ dataset, i.e., $\{(Y_i, x_i),\ i = 1, \ldots, t-1 \text{ and } i = t+1, \ldots, n\}$. Now let

$$u_t^{(t)} = \tilde D_{x_t}^{(t)}(Y_t) \quad \text{for } t = 1, \ldots, n. \tag{14}$$

The $u_t^{(t)}$ variables are the model-free analogs of the predictive residuals $\tilde e_t$ of Section 2.

Remark 3.3 We can now define Predictive Model-Free (PMF) confidence intervals for $\mu(x_f)$. The PMF Resampling Algorithm is identical to the above with one exception; replace step 1(a) with the following:

   a. Sample randomly (with replacement) the predictive $u$ data $u_1^{(1)}, \ldots, u_n^{(n)}$ to create bootstrap pseudo-data $u_1^*, \ldots, u_n^*$.
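A compact sketch of the MF and PMF algorithms for the choice $\hat\mu(x_f) = m_{x_f}$, reusing the helpers defined earlier; bandwidths are taken as given, and the seed handling is an illustrative assumption.

```python
import numpy as np

def mf_bootstrap_ci(xs, ys, x_f, h, b, alpha=0.10, B=1000, predictive=False,
                    rng=None):
    """MF/PMF bootstrap interval (13), with mu-hat taken as m_{x_f}."""
    rng = rng or np.random.default_rng(0)
    n = len(ys)
    if predictive:            # predictive u data of eq. (14), Remark 3.3
        u = np.array([D_tilde(xs[t], ys[t], np.delete(xs, t),
                              np.delete(ys, t), h, b) for t in range(n)])
    else:                     # u data of eq. (10)
        u = u_transform(xs, ys, h, b)
    mu_hat, _ = nw_fit(x_f, xs, ys, h)
    roots = np.empty(B)
    for j in range(B):
        u_star = rng.choice(u, size=n, replace=True)               # step (a)
        y_star = np.array([D_hat_inv(xs[t], u_star[t], xs, ys, h)
                           for t in range(n)])                     # step (b)
        mu_star, _ = nw_fit(x_f, xs, y_star, h)                    # steps (c)-(d)
        roots[j] = mu_hat - mu_star
    q_lo, q_hi = np.quantile(roots, [alpha / 2, 1 - alpha / 2])
    return mu_hat + q_lo, mu_hat + q_hi                            # interval (13)
```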

Remark 3.4 Recall that the model-free $L_1$-optimal predictor of $Y_f$ is given by $\mathrm{median}\{\tilde D_{x_f}^{-1}(u_i)\}$; see Politis (2010, 2013). Therefore, by analogy to Fact 3.1, we have

$$\mathrm{median}\{\tilde D_{x_f}^{-1}(u_i)\} = \tilde D_{x_f}^{-1}(\mathrm{median}\{u_i\}) \approx \tilde D_{x_f}^{-1}(1/2)$$

since the $u_i$'s are approximately Uniform$(0,1)$. Hence, if the practitioner wanted to estimate the median (as opposed to the mean) of the conditional distribution of $Y_f$ given $x_f$, then the local median $\tilde D_{x_f}^{-1}(1/2)$ could be bootstrapped using i.i.d. resampling in the same manner that $\mathrm{median}\{\tilde D_{x_f}^{-1}(u_i)\}$ can be bootstrapped.

4 Simulations

4.1 When a nonparametric regression model is true

The building block for the simulation in Section 4.1 is model (2) with $\mu(x) = \sin(x)$, $\sigma(x) = 1/2$, and errors $\varepsilon_t$ i.i.d. $N(0,1)$ or two-sided exponential (Laplace) rescaled to unit variance. Knowledge that the variance $\sigma(x)$ is constant was not used in the estimation, i.e., $\sigma(x)$ was estimated from the data. For each distribution, 500 datasets each of size $n = 100$ were created with the design points $x_1, \ldots, x_n$ being equi-spaced on $(0, 2\pi)$, and N-W estimates of $\mu(x) = E(Y|x)$ and $\sigma^2(x) = \mathrm{Var}(Y|x)$ were computed using a normal kernel in R.

Confidence intervals with nominal coverage level $1 - \alpha = 0.90$ were constructed using the two methods presented in Section 2.2, Traditional Model-Based (MB) and Predictive Residual Model-Based (PRMB); the two methods presented in Section 3.2, Model-Free (MF) of eq. (13) and Predictive Model-Free (PMF) from Remark 3.3; and the NORMAL approximation interval (4). The smoothing kernel $\Lambda$ in eq. (8) was taken to be the standard normal distribution function. All required bandwidths were computed by $L_1$ cross-validation. For each type of interval, the corresponding empirical coverage level (CVR) and average length (LEN) were recorded, together with the (empirical) standard error associated with each average length.
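The paper's simulations were carried out in R; the sketch below is an illustrative Python analogue of one such dataset, where the exact placement of the equi-spaced grid inside $(0, 2\pi)$ is an assumption.

```python
import numpy as np

def make_dataset(n=100, dist="normal", rng=None):
    """One simulated dataset from model (2): mu(x) = sin(x), sigma(x) = 1/2."""
    rng = rng or np.random.default_rng(0)
    xs = np.linspace(0.0, 2.0 * np.pi, n + 2)[1:-1]   # equi-spaced on (0, 2*pi)
    if dist == "normal":
        eps = rng.standard_normal(n)
    else:                                # Laplace, rescaled to unit variance
        eps = rng.laplace(size=n) / np.sqrt(2.0)
    return xs, np.sin(xs) + 0.5 * eps
```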

Tables 1, 2, 3, and 4 summarize our findings, and contain a number of important features.

Table 1 Empirical coverage levels (CVR) of confidence intervals according to the different methods (MB, PRMB, MF, PMF, NORMAL) at several points $x_f$ spanning the interval $(0, 2\pi)$. Nominal coverage was 0.90, and sample size $n = 100$; error distribution: i.i.d. Normal.

Table 2 (Average) lengths (LEN), with standard errors below them, of the confidence intervals reported in Table 1.

Table 3 As in Table 1, but with error distribution: i.i.d. Laplace.

Table 4 As in Table 2, but with error distribution: i.i.d. Laplace.

[Numerical entries of Tables 1-4 not recovered.]

The standard error of the reported coverage levels over the 500 replications is approximately $\sqrt{0.9 \times 0.1/500} \approx 0.013$. By construction, this simulation problem has some symmetry that helps us further appreciate the variability of the CVRs. To elaborate, note that for any $x \in [0, \pi]$ we have $\mu(x) = -\mu(2\pi - x)$, and the same symmetry holds for the derivatives of $\mu(x)$ as well, due to the sinusoidal structure. Hence, the expected CVRs should be the same for $x_f = 0.15\pi$ and $1.85\pi$ in all methods. So, for the NORMAL case of Table 1, the CVR at these points would be better estimated by averaging the CVRs at the two points, i.e., closer to 0.837; similarly, the PMF CVR for the same points could be better estimated by the corresponding average, i.e., 0.883.

The NORMAL intervals are characterized by under-coverage even when the true distribution is Normal. This under-coverage is more pronounced when $x_f = \pi/2$ or $3\pi/2$, due to the high bias of the kernel estimator at the points of a peak or valley, which the normal interval (4) sweeps under the carpet. The length of the NORMAL intervals is considerably less variable than that of the bootstrap-based ones; this is not surprising, since the extra randomization from the bootstrap is expected to inflate the overall variances.

Although regression model (2) holds true here, the MB intervals show pronounced under-coverage; this is a phenomenon well known in the bootstrap literature. As previously mentioned, the predictive residuals have generally larger scale than the fitted ones. Consequently, the PRMB intervals are wider, and manage to partially correct the under-coverage of the MB intervals.

The performance of the MF intervals is better than that of the MB intervals, despite the fact that the former are constructed without making use of eq. (2). However, as with the MB intervals, the MF intervals also show a tendency towards under-coverage. The PMF intervals appear to nicely correct the MF under-coverage in the Normal case, although in the Laplace case they yield an over-correction. However, even with this over-correction, the PMF coverages are closer to the nominal in most entries of Tables 1 and 3, with only a few exceptions in Table 3 where the PRMB intervals are more accurate.

4.2 When a nonparametric regression model is not true

In this subsection, we investigate the performance of the different confidence intervals in the absence of model (2).

For easy comparison with Section 4.1, we will keep the same (conditional) mean and variance; i.e., we will generate independent $Y$ data such that $E(Y|x) = \sin(x)$ and $\mathrm{Var}(Y|x) = 1/2$, with design points $x_1, \ldots, x_{100}$ equi-spaced on $(0, 2\pi)$. However, the error structure $\varepsilon_x = (Y - E(Y|x))/\sqrt{\mathrm{Var}(Y|x)}$ has skewness and/or kurtosis that depends on $x$, thereby violating the i.i.d. assumption. For our simulation, we considered

$$\varepsilon_x = \frac{c_x Z + (1 - c_x) W}{\sqrt{c_x^2 + (1 - c_x)^2}} \tag{15}$$

where $c_x = x/(2\pi)$ for $x \in [0, 2\pi]$, and $Z \sim N(0,1)$ independent of $W$; the variable $W$ is distributed either as a standardized $\chi^2_1$, to capture a changing skewness, or as a rescaled $t_5$, to capture a changing kurtosis; note that $EW = 0$ and $EW^2 = 1$ in both cases.

Table 5 As in Table 1, but with error distribution (15): non-i.i.d. skewed.

Table 6 As in Table 2, but with error distribution (15): non-i.i.d. skewed.

Table 7 As in Table 1, but with error distribution (15): non-i.i.d. kurtotic.

Table 8 As in Table 2, but with error distribution (15): non-i.i.d. kurtotic.

[Numerical entries of Tables 5-8 not recovered.]

Our results are summarized in Tables 5, 6, 7, and 8. The findings are qualitatively similar to those in Section 4.1. The PMF intervals are the undisputed winners here in terms of coverage accuracy. By contrast, the NORMAL and the MB bootstrap intervals show pronounced under-coverage; interestingly, these are the two methods that most practitioners use at the moment.
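A sketch of one draw from eq. (15) follows; the particular standardizations of the $\chi^2_1$ and $t_5$ variables (so that $EW = 0$ and $EW^2 = 1$) are natural choices but assumptions here, since the exact scalings did not survive the transcription.

```python
import numpy as np

def eps_x(x, kind="skewed", rng=None):
    """Draw the non-i.i.d. error of eq. (15) at design point x in (0, 2*pi)."""
    rng = rng or np.random.default_rng(0)
    c = x / (2.0 * np.pi)                 # c_x = x / (2*pi)
    z = rng.standard_normal()
    if kind == "skewed":                  # standardized chi^2_1: EW=0, EW^2=1
        w = (rng.chisquare(1) - 1.0) / np.sqrt(2.0)
    else:                                 # t_5 rescaled to unit variance
        w = rng.standard_t(5) * np.sqrt(3.0 / 5.0)
    return (c * z + (1.0 - c) * w) / np.sqrt(c ** 2 + (1.0 - c) ** 2)
```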

References

1. Devroye, L. (1981). Laws of the iterated logarithm for order statistics of uniform spacings, Ann. Probab., 9(6).
2. Efron, B. (1979). Bootstrap methods: another look at the jackknife, Ann. Statist., 7, 1-26.
3. Freedman, D.A. (1981). Bootstrapping regression models, Ann. Statist., 9.
4. Hall, P. (1993). On Edgeworth expansion and bootstrap confidence bands in nonparametric curve estimation, J. Roy. Statist. Soc., Ser. B, 55.
5. Härdle, W. and Bowman, A.W. (1988). Bootstrapping in nonparametric regression: local adaptive smoothing and confidence bands, J. Amer. Statist. Assoc., 83.
6. Härdle, W. and Marron, J.S. (1991). Bootstrap simultaneous error bars for nonparametric regression, Ann. Statist., 19.
7. Li, Q. and Racine, J.S. (2007). Nonparametric Econometrics, Princeton Univ. Press, Princeton, NJ.
8. Linton, O.B., Chen, R., Wang, N. and Härdle, W. (1997). An analysis of transformations for additive nonparametric regression, J. Amer. Statist. Assoc., 92.
9. Neumann, M. and Polzehl, J. (1998). Simultaneous bootstrap confidence bands in nonparametric regression, J. Nonparam. Statist., 9.
10. Politis, D.N. (2010). Model-free model-fitting and predictive distributions, Discussion Paper, Department of Economics, Univ. of California, San Diego. Retrievable from: http://escholarship.org/uc/item/67j6s
11. Politis, D.N. (2013). Model-free model-fitting and predictive distributions, to appear as a Discussion Paper in the journal Test in 2013.
