This is a repository copy of Generalized Nonparametric Smoothing with Mixed Discrete and Continuous Data.


This is a repository copy of Generalized Nonparametric Smoothing with Mixed Discrete and Continuous Data.

White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/3/

Version: Accepted Version

Article: Li, Degui (orcid.org/--8-38x), Simar, Léopold and Zelenyuk, Valentin. Generalized Nonparametric Smoothing with Mixed Discrete and Continuous Data. Computational Statistics & Data Analysis. pp. –. ISSN –. https://doi.org/10.1016/j.csda...3

Reuse: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs (CC BY-NC-ND) licence. This licence only allows you to download this work and share it with others as long as you credit the authors, but you can't change the article in any way or use it commercially. More information and the full terms of the licence here: https://creativecommons.org/licenses/

Takedown: If you consider content in White Rose Research Online to be in breach of UK law, please notify us by emailing eprints@whiterose.ac.uk, including the URL of the record and the reason for the withdrawal request.

eprints@whiterose.ac.uk
https://eprints.whiterose.ac.uk/

Generalized nonparametric smoothing with mixed discrete and continuous data

Degui Li (a), Léopold Simar (b), Valentin Zelenyuk (c)

(a) Department of Mathematics, University of York, The United Kingdom
(b) Institut de Statistique, Biostatistique et Sciences Actuarielles, Université Catholique de Louvain, Belgium
(c) School of Economics and Centre for Efficiency and Productivity Analysis, University of Queensland, Australia

Corresponding author. Email address: v.zelenyuk@uq.edu.au.

Abstract

The nonparametric smoothing technique with mixed discrete and continuous regressors is considered. It is generally admitted that it is better to smooth the discrete variables, which is similar to the smoothing technique for continuous regressors but uses discrete kernels. However, such an approach might lead to a potential problem linked to the bandwidth selection for the continuous regressors due to the presence of the discrete regressors. Through a numerical study, it is found that in many cases the performance of the resulting nonparametric regression estimates may deteriorate if the discrete variables are smoothed in the way previously addressed, and that a fully separate estimation without any smoothing of the discrete variables may provide significantly better results. As a solution, a simple generalization of the nonparametric smoothing technique with both discrete and continuous data is suggested to address this problem and to provide estimates with more robust performance. The asymptotic theory for the new nonparametric smoothing method is developed, and the finite sample behaviour of the proposed generalized approach is studied through extensive Monte-Carlo experiments as well as an empirical illustration.

Keywords: Discrete regressors, Nonparametric regression, Kernel smoothing, Cross-validation, Local linear smoothing.

1. Introduction

The rapid advance of computing power and the wide availability of large data sets have encouraged many researchers to substantially increase their attention to various nonparametric methods for estimating nonlinear regression relationships. One of the most popular nonparametric methods appears to be the local polynomial least squares method, considered by Stone (1977), Cleveland (1979), Cleveland and Devlin (1988), Fan (1992, 1993), Fan and Gijbels (1992), Ruppert and Wand (1994) and popularized by Fan and Gijbels (1996). This method has received even greater appeal since it was substantially empowered by the seminal work of Racine and Li (2004), which suggests a neat way to deal with discrete regressors in the context of nonparametric regression. Racine and Li's work has inspired many interesting applications in a wide range of areas, for example, by Stengos and Zacharias (2006), Maasoumi et al. (2007), Parmeter et al. (2007), Eren and Henderson (2008), Walls (2009), Hartarska et al. and Henderson, to mention just a few. In these works, where Racine and Li (2004)'s approach is used, researchers are able to obtain new insights with much more confidence, as their approach is free from imposing any parametric form on the regression relationship, while using both continuous and discrete regressors without splitting the sample into sub-samples for each value of

the discrete variables. In most of the empirical as well as theoretical studies and software codes using the Racine and Li (2004) approach that we are aware of, the way this approach is applied is the default or simple form that we will describe below. Indeed, it became somewhat common to smooth the discrete regressors in local polynomial least squares almost automatically, perceiving that one should obtain better results than if the nonparametric estimator is applied to each group separately (see Li and Racine, 2007, and the references therein). While this perception appears to hold in various cases and is supported by the simulations conducted by Racine and Li (2004) for the case of local constant fitting, in this article we illustrate that in some cases it may not hold, and smoothing the discrete variables can actually deteriorate substantially the resulting estimator of the regression function. In some situations, even a fully separate estimation for each group identified by the discrete (or categorical) variable may give more accurate results (e.g., in terms of the Mean Squared Error, MSE) than the approach with smoothing over the discrete variable. In these cases, the reduction in variance, or the efficiency gain due to smoothing the discrete regressors, can be outweighed by a substantial bias introduced by this smoothing. This may happen in both small and relatively large samples, and so, for such cases, it may be preferable to make a fully separate estimation for each group.

We will see below that the source of the problem comes from the bandwidth structure suggested by the basic or default method appearing in the existing literature which we are aware of (see, for example, some references listed above). In this bandwidth selection procedure, a simple bandwidth scheme is proposed for the continuous variables, in the sense that the same bandwidth vector is taken across the various subgroups determined by the discrete variables, and then the bandwidths for the continuous and for the discrete variables are simultaneously determined. Hence, even if the resulting estimator of the bandwidth for a discrete variable takes the value of zero (i.e., no smoothing of this discrete variable, with separate estimation by groups), the resulting bandwidths for the continuous variables are still restricted to be common to the various categories of the associated discrete variables. This may lead in some cases to over-smoothing in some groups and under-smoothing in the others.

To fix the ideas, suppose that we use the local constant kernel method (Nadaraya-Watson) and that we have two groups of observations (determined by one discrete variable) and only one continuous variable. Suppose in addition that in one group the variable is relevant (the derivative of the regression function with respect to this variable is not zero), and in the other it is not relevant. In the latter group, the optimal bandwidth would converge to infinity as shown by Hall et al. (2007), whereas in the first group it may converge to zero. A fully separate analysis, which implies non-smoothing of the discrete variable, would capture this feature, whereas the simple-smoothing would miss it. The latter approach will provide a common bandwidth for the continuous variable that will under-smooth the regression in the group where the variable is irrelevant and over-smooth the regression in the group where it is relevant. This extreme case seems obvious, but apparently it has been overlooked in the existing literature.
We can also imagine less extreme situations where the phenomenon would be similar: a small influence of the continuous variable in one group and a more complex structure in the other. When local polynomial smoothing is used, the same problem would also appear if in one group the local polynomial approximation is not far from the true regression, but the structure in the other group is globally much more complex. For example, in the local linear case, the optimal bandwidth would converge to infinity if the true regression is linear, or converge to zero if the true regression function is highly nonlinear (Li and Racine, 2004). Obviously, the problem can be even more severe when estimating the derivative of the nonparametric regression function. We will briefly illustrate this point in one example below, showing the consequences to which the simple-smoothing method could lead. To the best of our knowledge, this phenomenon has never been analyzed in the literature using smoothing techniques for discrete variables. It is one of our objectives to investigate the consequences of this issue.

In this paper, we introduce a simple way to generalize and improve the commonly used smoothing methods and overcome the potential difficulties mentioned above. Specifically, in the bandwidth selection procedure, we allow different bandwidth parameters for the continuous variables in different categories of the discrete variables. We call this method the complete-smoothing approach, in contrast to the simple-smoothing approach used in most of the existing literature and to the non-smoothing approach where the groups are treated fully separately. This smoothing approach comes, of course, at a cost of somewhat higher computational complexity, but we will see later that the gain in precision of the regression estimate can be substantial. We will limit the presentation to the case of categorical discrete variables where each value of the discrete variables determines a group. More generally, and beyond the extreme cases described above, whether the bias beats the variance or not essentially depends on the degree of difference of the curvatures of the regression relationship pertinent to each group identified

by the discrete variable, and to some extent also depends on other aspects of the Data Generating Process (DGP), such as the size of the noise. Thus, in general, a priori it is not clear whether it is better to smooth the discrete variables or to do a fully separate estimation (unless the latter is hardly reliable or impossible due to very small data in a given group) and, so far, there appears to be no formal rule of thumb for deciding on this dilemma. The complete-smoothing approach that we suggest below not only allows for smoothing the discrete variables, but also uses different bandwidths for each group for the continuous variables. Since this encompasses both the non-smoothing approach and the simple-smoothing technique as special cases, we can expect more robust performance of the complete-smoothing approach, which will be investigated later in the paper. One of the main messages of our paper is that when using the simple-smoothing approach, an additional assumption of a similar degree of smoothness with respect to the continuous variables is implicitly imposed for the different categories of the discrete variables, and one should recognize or acknowledge it explicitly. Moreover, we show that such an additional assumption can distort the estimates substantially. While many examples can be used to illustrate the problem, we will deliberately take simple examples (a univariate continuous and a univariate discrete variable) to avoid the potential curse of dimensionality problem and to vividly illustrate the point. We will also suggest a way to overcome this problem, whenever it is numerically doable, and point out the numerical difficulties involved there as well as the open questions that still remain.

The rest of this paper is organized as follows. Section 2 recalls the basic methodology of the local linear least-squares method; Section 3 introduces the local linear complete-smoothing method and establishes the asymptotic theory; Section 4 illustrates how severe the problem can be through some extensive Monte-Carlo experiments and how the proposed complete-smoothing approach outperforms the simple-smoothing approach; Section 5 illustrates the issue with a real data set; and Section 6 concludes and summarizes our main findings. Some technical assumptions and proofs of the theoretical results are given in Appendices A–C.

2. Local linear simple-smoothing

The point we aim to stress in this paper is a general phenomenon linked to nonparametric kernel-based regression, but we illustrate it through a method that appears to be the most popular in practice: the local linear least squares (LLLS), which is a particular case of local polynomial least squares (LPLS). Our remarks and suggestions could obviously be applied to other nonparametric kernel-based estimation methods such as the local non-linear least squares and local linear quasi-likelihood methods (see, for example, Gozalo and Linton, 2000; Frölich; Park et al.). We summarize here the basic idea of the LLLS method, and refer to Fan and Gijbels (1996), Pagan and Ullah (1999), and Li and Racine (2007) for details. This method allows a flexible form through approximating locally the true unknown regression. Assume that the dependent variables $Y_i \in \mathbb{R}$, $i = 1, \ldots, n$, are generated by the following regression model:

$$ Y_i = m(Z_i) + \varepsilon_i, \qquad (1) $$

where $Z_i = (Z_i^c, Z_i^d)$ with $Z_i^c \in \mathbb{R}^p$ being continuous and $Z_i^d$ being a $q$-dimensional discrete vector. We next focus on the presentation for categorical unordered variables, but the same could be done for naturally ordered variables with slight modifications; see Racine and Li (2004) for details.
In addition, we make the standard assumptions on the errors $\varepsilon_i$: they are independent random variables with $E(\varepsilon_i \mid Z_i) = 0$ and $Var(\varepsilon_i \mid Z_i) < \infty$ a.s., although the methodology we will discuss may also apply to more sophisticated setups. The flexibility of the model is related to the fact that the unknown regression function $m(\cdot)$ is not specified. No particular parametric assumptions are made on $m(\cdot)$ itself except for some smoothness properties of $m(\cdot, z^d)$. For the sake of simplicity, we assume that $m(\cdot, z^d)$ is twice continuously differentiable with respect to its first $p$ continuous arguments. Finally, we also need some regularity conditions on the smoothness of the conditional density $f(z^c \mid z^d)$ with respect to the $p$ continuous arguments.

The main idea of LPLS is to approximate $m(u, v)$ for all $(u, v)$ in a neighborhood of a given point $z = (z^c, z^d)$ by a local polynomial function of degree $r$ in the direction of $z^c$, and then obtain the LPLS estimate by minimizing the resulting weighted sum of squared errors. A degree $r = 0$ would result in the local constant estimator. As

mentioned above, we will limit our presentation to the case of the local linear approximation ($r = 1$). Extension to higher orders follows the same ideas but at a cost of more notational complexity. Consider the following local approximation:

$$ m(u, v) \approx \alpha_{z^c, z^d} + \beta_{z^c, z^d}^{\tau}(u - z^c), \qquad (2) $$

where $\alpha_{z^c,z^d} \in \mathbb{R}$ and $\beta_{z^c,z^d} \in \mathbb{R}^p$ are quantities to be estimated that, in general, vary with $(z^c, z^d)$. To take only neighboring observations around $(z^c, z^d)$, or to give more weight to them, when evaluating the least-squares criterion, the kernel approach is used. For the continuous variables we use a product kernel (but other multivariate kernels may also work), i.e.,

$$ K_h(Z_i^c - z^c) = \prod_{j=1}^{p} \frac{1}{h_j} K_j\Big( \frac{Z_{ij}^c - z_j^c}{h_j} \Big), \qquad (3) $$

where $h = (h_1, \ldots, h_p)$ is a vector of bandwidths, $K_j(\cdot)$ is a standard univariate kernel function such as the univariate standard Gaussian density, $z_j^c$ is the $j$th component of $z^c$, and $Z_i^c = (Z_{i1}^c, \ldots, Z_{ip}^c)^{\tau}$. For the discrete vectors, we use the discrete kernel introduced by Racine and Li (2004), i.e.,

$$ \Lambda_{\lambda}(Z_i^d, z^d) = \prod_{l=1}^{q} \lambda_l^{I(Z_{il}^d \neq z_l^d)}, \qquad (4) $$

where $I(A)$ is the indicator function, with $I(A) = 1$ if $A$ holds and $0$ otherwise, $\lambda_l \in [0, 1]$ are the bandwidths for the discrete variables, $l = 1, \ldots, q$, and $Z_i^d = (Z_{i1}^d, \ldots, Z_{iq}^d)^{\tau}$. The LLLS or weighted least squares criterion at a given point $(z^c, z^d)$, measuring the quality of the approximation, is thus given by

$$ C_n(\alpha_{z^c,z^d}, \beta_{z^c,z^d}; z^c, z^d) = \sum_{i=1}^{n} \big[ Y_i - \alpha_{z^c,z^d} - \beta_{z^c,z^d}^{\tau}(Z_i^c - z^c) \big]^2 K_h(Z_i^c - z^c)\, \Lambda_{\lambda}(Z_i^d, z^d). \qquad (5) $$

We note that if for a particular $l$ we have $\lambda_l = 0$ (with the convention that $0^0 = 1$), then there is no smoothing of this $l$th discrete variable; i.e., the evaluation in (5) is done separately for each subsample determined by this discrete variable, with a common $h$. At the other limit, if $\lambda_l = 1$, we do not take the discrete variable $Z_{il}^d$ into account in the analysis, i.e., all the sample points have a weight in (5) which is independent of the value of $Z_{il}^d$. Let $\hat{\alpha}_{z^c,z^d}$ and $\hat{\beta}_{z^c,z^d}$ minimize the criterion $C_n(\cdot, \cdot; z)$ with $z = (z^c, z^d)$; then the proposed estimated value of the regression function at the point $(z^c, z^d)$, denoted by $\hat{m}(z^c, z^d)$, is given by $\hat{\alpha}_{z^c,z^d}$, whereas $\hat{\beta}_{z^c,z^d}$ gives the estimated value of the first partial derivative of $m(\cdot)$ with respect to the continuous variables $z^c$ evaluated at $(z^c, z^d)$. As a common bandwidth $h$ is used to smooth over the continuous variables in the different subgroups, we call such a method local linear simple-smoothing.

The bandwidth selection is a very important issue in nonparametric kernel-based estimation. The selection of appropriate bandwidths $(h, \lambda)$ in (5) can be done by the cross-validation (CV) method, although many other approaches can be adopted, such as the corrected AIC method. When adopting the CV approach, the values $\hat{h}$ and $\hat{\lambda}$ are chosen to minimize

$$ CV(h, \lambda) = \sum_{i=1}^{n} \big[ Y_i - \hat{m}_{(-i)}(Z_i^c, Z_i^d \mid h, \lambda) \big]^2 M(Z_i^c, Z_i^d), \qquad (6) $$

where $M(Z_i^c, Z_i^d)$ is a weight function trimming out boundary observations, and $\hat{m}_{(-i)}(Z_i^c, Z_i^d \mid h, \lambda)$ is the leave-one-out kernel estimator of $m(Z_i^c, Z_i^d)$ with bandwidths $h$ and $\lambda$, i.e., estimated by minimizing (5) but leaving the $i$th observation out of the sample. The properties of the resulting estimator $\hat{m}(z^c, z^d)$ using the CV bandwidth selection method have been described in the seminal paper by Racine and Li (2004) for the local constant case ($r = 0$), and in Li and Racine (2004) for the local linear case. It is important to notice that, as is common in nonparametric smoothing approaches, these results assume that $h_j \to 0$ for all $j = 1, \ldots, p$, and $n h_1 \cdots h_p \to \infty$ as $n \to \infty$.
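To make the objects in (3)–(5) concrete, the following minimal Python sketch (our own illustration, not the authors' code; the function names are ours and a Gaussian kernel is assumed for $K_j$) computes the simple-smoothing local linear fit at a single evaluation point.

```python
import numpy as np

def mixed_kernel_weights(Zc, Zd, zc, zd, h, lam):
    """Product Gaussian kernel over the continuous regressors, as in (3),
    times the Racine-Li discrete kernel lam**I(mismatch), as in (4)."""
    u = (Zc - zc) / h                                    # shape (n, p)
    k_cont = np.prod(np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * h), axis=1)
    k_disc = np.prod(np.where(Zd != zd, lam, 1.0), axis=1)
    return k_cont * k_disc

def llls_fit(Y, Zc, Zd, zc, zd, h, lam):
    """Minimize the weighted least-squares criterion (5) at the point (zc, zd);
    returns (alpha_hat, beta_hat), i.e. the regression and derivative estimates."""
    w = mixed_kernel_weights(Zc, Zd, zc, zd, h, lam)
    sw = np.sqrt(w)
    X = np.column_stack([np.ones(len(Y)), Zc - zc])      # local linear design
    coef, *_ = np.linalg.lstsq(X * sw[:, None], Y * sw, rcond=None)
    return coef[0], coef[1:]
```

Here `Zc` is an (n, p) array, `Zd` an (n, q) array, and `h` and `lam` the bandwidth vectors; setting `lam` to zero reproduces the split-sample fit, while `lam` equal to one ignores the discrete regressors, matching the two limiting cases discussed above.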
The argument generally admitted in the existing literature is that it is better to smooth the discrete variable in (5), because the separate analysis on each subsample defined by the categorical variables $Z^d$ would correspond to the particular case where all $\lambda_l = 0$, $l = 1, \ldots, q$. However, we indicate in the next section that this might not be the case in some situations.
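A corresponding sketch of the leave-one-out cross-validation choice of $(h, \lambda)$ in (6), again our own hedged illustration: it reuses `llls_fit` from the sketch above, takes the trimming weight $M \equiv 1$, and searches user-supplied grids for a univariate $Z^c$ and $Z^d$.

```python
from itertools import product
import numpy as np

def loo_cv_score(Y, Zc, Zd, h, lam):
    """Leave-one-out CV criterion (6) with trimming weight M == 1 (up to the factor 1/n)."""
    n = len(Y)
    sq_err = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        alpha_i, _ = llls_fit(Y[keep], Zc[keep], Zd[keep], Zc[i], Zd[i], h, lam)
        sq_err[i] = (Y[i] - alpha_i) ** 2
    return sq_err.mean()

def simple_smoothing_cv(Y, Zc, Zd, h_grid, lam_grid):
    """Grid-search version of the CV choice of a single common (h, lambda)."""
    scores = {(h, l): loo_cv_score(Y, Zc, Zd, np.atleast_1d(float(h)),
                                   np.atleast_1d(float(l)))
              for h, l in product(h_grid, lam_grid)}
    return min(scores, key=scores.get)   # (h_hat, lambda_hat)
```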

3. Local linear complete-smoothing

To simplify the argument, let us suppose that we have a univariate discrete variable defining two groups of observations (the argument would be the same when considering all the subgroups defined by the multivariate categorical vector $Z^d$). The DGP in the two groups may have different characteristics (shape of the regression function, curvature, or size of the noise). Extreme cases of such differences have been briefly described in the introductory section. Unless the sample size within one of the groups is very small, the CV bandwidth selection procedure described above would provide a small value of the tuning parameter $\lambda$ for a relevant discrete regressor, resulting in the frequency method in the limiting case where $\lambda = 0$, and a common $h$ for the different groups. Furthermore, we suppose that the continuous variable $Z^c$ is also univariate. Extension to the case of multivariate continuous and discrete variables is straightforward. However, such an extension may not be so useful because of the curse of dimensionality problem.

As pointed out above for the extreme cases, the simple-smoothing based CV method might be inappropriate in many situations where some important characteristics of the DGP are quite different between the two subgroups. In such instances, having a common bandwidth for the different groups may lead to under-smoothing (over the continuous variables) for one group while over-smoothing for the other. While this may not matter asymptotically as long as the common bandwidth has the proper order, it does matter in the finite sample case (small and even relatively large ones), sometimes substantially, as illustrated in our examples in the next section. To address this issue, it might be better to do a fully separate estimation within each subgroup, allowing the bandwidth over the continuous variable to vary across the different groups without smoothing the discrete variable. Although, as pointed out by Li and Racine (2004), this may increase the variance, the bias could be lowered substantially, which would lead to estimation with a smaller MSE than the simple-smoothing method. We will show in the next section, through Monte-Carlo simulations, that the loss in accuracy of the default simple-smoothing estimation method may be dramatic. However, it is well known that if the sample size in one group is too small, one cannot hope to get sensible results with the separate local linear estimation.

We next give a solution to address this problem. A natural way is to allow different bandwidths for the continuous variable in the different subgroups, which addresses the problem of over- or under-smoothing in subgroups, and to allow smoothing over the discrete variable, as in Racine and Li (2004), to circumvent the problem that the number of observations in one subgroup might be too small. We call such a method the complete-smoothing method. Formally, in the case of two subgroups defined by one discrete variable, the equation (5) defining the estimator could be replaced by

$$ C_n(\alpha_{z^c,z^d}, \beta_{z^c,z^d}; z^c, z^d) = \sum_{i=1}^{n} \big[ Y_i - \alpha_{z^c,z^d} - \beta_{z^c,z^d}^{\tau}(Z_i^c - z^c) \big]^2 \Lambda_{\lambda}(Z_i^d, z^d) \Big[ K_{h(1)}(Z_i^c - z^c)\, I\{ Z_i^d = z^d(1) \} + K_{h(2)}(Z_i^c - z^c)\, I\{ Z_i^d = z^d(2) \} \Big], \qquad (7) $$

where $z^d(k)$, $k = 1, 2$, are the two possible values of $z^d$. For simplicity, we use the same kernel function in the two subgroups, but allow potentially different bandwidths $h(1)$ and $h(2)$ for these subgroups. Let $\tilde{m}(z^c, z^d)$ be the local linear estimated value of $m(z^c, z^d)$ with complete-smoothing, which minimizes $C_n(\alpha_{z^c,z^d}, \beta_{z^c,z^d}; z^c, z^d)$ with respect to $\alpha_{z^c,z^d}$ and $\beta_{z^c,z^d}$.
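Continuing the sketch above (and still purely illustrative), the group-specific weighting in (7) only changes how the continuous kernel is built, for example:

```python
import numpy as np

def complete_smoothing_fit(Y, Zc, Zd, zc, zd, h_by_group, lam, groups=(0, 1)):
    """Local linear fit at (zc, zd) under criterion (7): each discrete group gets
    its own continuous bandwidth, while lambda still smooths across the groups."""
    n = len(Y)
    k_cont = np.zeros(n)
    for g, h_g in zip(groups, h_by_group):
        in_g = (Zd[:, 0] == g)                           # univariate Z^d assumed
        u = (Zc[in_g] - zc) / h_g
        k_cont[in_g] = np.prod(np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * h_g), axis=1)
    k_disc = np.prod(np.where(Zd != zd, lam, 1.0), axis=1)
    sw = np.sqrt(k_cont * k_disc)
    X = np.column_stack([np.ones(n), Zc - zc])
    coef, *_ = np.linalg.lstsq(X * sw[:, None], Y * sw, rcond=None)
    return coef[0], coef[1:]
```

Cross-validating $(h(1), h(2), \lambda)$ as in (10) below then amounts to the same kind of grid search as before, only over three tuning parameters instead of two.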
It is clear that the general formulation we propose in (7) encompasses both the fully separate analysis by groups ($\lambda = 0$) and the simple-smoothing approach ($h(1) = h(2)$). Let $\mu_j = \int u^j K(u)\,du$ and $\nu_j = \int u^j K^2(u)\,du$, and let $m^{(2)}(\cdot, z^d)$ denote the second derivative of $m(\cdot, z^d)$ with respect to $z^c$. Define

$$ \sigma_m^2(z^c, z^d) = \frac{\nu_0\, \sigma^2(z^c, z^d)}{f(z^c \mid z^d)\, P(Z^d = z^d)} $$

and

$$ b_{m,1}(z^c) = \frac{\mu_2}{2}\, m^{(2)}(z^c, z^d(1))\, h^2(1) + \lambda\, \frac{(1 - p_1)\, f(z^c \mid z^d(2))}{p_1\, f(z^c \mid z^d(1))} \big[ m(z^c, z^d(2)) - m(z^c, z^d(1)) \big], $$

$$ b_{m,2}(z^c) = \frac{\mu_2}{2}\, m^{(2)}(z^c, z^d(2))\, h^2(2) + \lambda\, \frac{p_1\, f(z^c \mid z^d(1))}{(1 - p_1)\, f(z^c \mid z^d(2))} \big[ m(z^c, z^d(1)) - m(z^c, z^d(2)) \big], $$

where $\sigma^2(\cdot, \cdot)$ and $f(\cdot \mid \cdot)$ are defined in Assumption 3 in Appendix A and $0 < p_1 = P(Z^d = z^d(1)) < 1$. We next give the asymptotic distribution theory for the local linear complete-smoothing estimator $\tilde{m}(z^c, z^d)$.

Theorem 3.1. Suppose that Assumptions 1–4 in Appendix A are satisfied. If $z^d = z^d(1)$, we have

$$ \sqrt{n h(1)}\, \big[ \tilde{m}(z^c, z^d(1)) - m(z^c, z^d(1)) - b_{m,1}(z^c) \big] \xrightarrow{d} N\big[ 0,\ \sigma_m^2(z^c, z^d(1)) \big]. \qquad (8) $$

If $z^d = z^d(2)$, we have

$$ \sqrt{n h(2)}\, \big[ \tilde{m}(z^c, z^d(2)) - m(z^c, z^d(2)) - b_{m,2}(z^c) \big] \xrightarrow{d} N\big[ 0,\ \sigma_m^2(z^c, z^d(2)) \big]. \qquad (9) $$

We next derive the optimal bandwidths for the local linear complete-smoothing estimator. Let $\tilde{m}_{(-i)}(Z_i^c, Z_i^d \mid h(1), h(2), \lambda)$ be the leave-one-out local linear complete-smoothing estimated value of $m(Z_i^c, Z_i^d)$ with tuning parameters $h(1)$, $h(2)$ and $\lambda$. Then, similarly to the CV method introduced in Section 2, we define

$$ CV(h(1), h(2), \lambda) = \sum_{i=1}^{n} \big[ Y_i - \tilde{m}_{(-i)}(Z_i^c, Z_i^d \mid h(1), h(2), \lambda) \big]^2 M(Z_i^c, Z_i^d). \qquad (10) $$

The optimal bandwidths $[\hat{h}(1), \hat{h}(2), \hat{\lambda}]$ are the values which minimize $CV(h(1), h(2), \lambda)$. Define

$$ \psi_1(h(1), \lambda) = p_1 \int \Big\{ b_2(z^c, z^d(1))\, h^2(1) + \lambda\, \frac{(1 - p_1)\, f(z^c \mid z^d(2))}{p_1\, f(z^c \mid z^d(1))} \big[ m(z^c, z^d(2)) - m(z^c, z^d(1)) \big] \Big\}^2 M(z^c, z^d(1))\, f(z^c \mid z^d(1))\, dz^c, $$

$$ \psi_2(h(2), \lambda) = (1 - p_1) \int \Big\{ b_2(z^c, z^d(2))\, h^2(2) + \lambda\, \frac{p_1\, f(z^c \mid z^d(1))}{(1 - p_1)\, f(z^c \mid z^d(2))} \big[ m(z^c, z^d(1)) - m(z^c, z^d(2)) \big] \Big\}^2 M(z^c, z^d(2))\, f(z^c \mid z^d(2))\, dz^c, $$

where $b_2(z^c, z^d) = \frac{\mu_2}{2}\, m^{(2)}(z^c, z^d)$. Define

$$ \chi_1(h(1)) = \frac{\nu_0}{h(1)} \int \sigma^2(z^c, z^d(1))\, M(z^c, z^d(1))\, dz^c, \qquad \chi_2(h(2)) = \frac{\nu_0}{h(2)} \int \sigma^2(z^c, z^d(2))\, M(z^c, z^d(2))\, dz^c. $$

We next give the asymptotic expansion of $CV(h(1), h(2), \lambda)$, which is critical for deriving the asymptotic property of the optimal bandwidths $[\hat{h}(1), \hat{h}(2), \hat{\lambda}]$.

Theorem 3.2. Suppose that the conditions in Theorem 3.1 and Assumption 5 are satisfied. Then, we have

$$ CV(h(1), h(2), \lambda) = CV_0 + \Phi(h(1), h(2), \lambda) + \text{s.o.}, \qquad (11) $$

where $CV_0 := \sum_{i=1}^{n} \varepsilon_i^2 M(Z_i^c, Z_i^d)$ is independent of the tuning parameters,

$$ \Phi(h(1), h(2), \lambda) = n \big[ \psi_1(h(1), \lambda) + \psi_2(h(2), \lambda) \big] + \chi_1(h(1)) + \chi_2(h(2)), $$

and s.o. represents terms of smaller asymptotic order. Define

$$ MSE(h(1), h(2), \lambda) = \sum_{i=1}^{n} \big[ m(Z_i^c, Z_i^d) - \tilde{m}_{(-i)}(Z_i^c, Z_i^d \mid h(1), h(2), \lambda) \big]^2 M(Z_i^c, Z_i^d). \qquad (12) $$

In the proof of Theorem 3.2 in Appendix B, we show that

$$ MSE(h(1), h(2), \lambda) = \Phi(h(1), h(2), \lambda) + \text{s.o.} \qquad (13) $$

Letting $[h_0(1), h_0(2), \lambda_0]$ be the minimizer of $MSE(h(1), h(2), \lambda)$, by Theorem 3.2 and standard arguments such as the proofs of Theorems 3.1 and 3.2 in Li and Racine (2004), we have the following corollary.

Corollary 3.1. Suppose that the conditions in Theorem 3.2 are satisfied. Then, we have

$$ \frac{\hat{h}(1)}{h_0(1)} - 1 = o_P(1), \qquad \frac{\hat{h}(2)}{h_0(2)} - 1 = o_P(1), \qquad \frac{\hat{\lambda}}{\lambda_0} - 1 = o_P(1). \qquad (14) $$

Furthermore, the asymptotic normal distributions in (8) and (9) still hold when $[h(1), h(2), \lambda]$ are replaced by $[\hat{h}(1), \hat{h}(2), \hat{\lambda}]$.

We next compare the MSE measures of the local linear complete-smoothing and the local linear simple-smoothing. Let

$$ MSE(h, \lambda) = \sum_{i=1}^{n} \big[ m(Z_i^c, Z_i^d) - \hat{m}_{(-i)}(Z_i^c, Z_i^d \mid h, \lambda) \big]^2 M(Z_i^c, Z_i^d), \qquad (15) $$

and let $[\bar{h}, \bar{\lambda}]$ be the minimizer of $MSE(h, \lambda)$. Analogously, we can also show that

$$ \frac{\hat{h}}{\bar{h}} - 1 = o_P(1), \qquad \frac{\hat{\lambda}}{\bar{\lambda}} - 1 = o_P(1). \qquad (16) $$

Noting that $MSE(h(1), h(2), \lambda) = MSE(h, \lambda) + \text{s.o.}$ for the case of $h(1) = h(2) = h$ and $\lambda = o(1)$, we may show that

$$ \min_{h(1), h(2), \lambda} MSE(h(1), h(2), \lambda) \le \min_{h, \lambda} MSE(h, \lambda). \qquad (17) $$

Using (14), (16) and (17), we can easily prove the following corollary, which indicates that in the large sample case the MSE obtained using the optimal bandwidths chosen by our method is smaller than that obtained using the default optimal bandwidths.

Corollary 3.2. Suppose that the conditions in Theorem 3.2 are satisfied. Then, we have $MSE(\hat{h}(1), \hat{h}(2), \hat{\lambda}) \le MSE(\hat{h}, \hat{\lambda})$ in probability.

In the next section, we will investigate the finite sample properties of the different approaches. This will further confirm the expected theoretical advantage of our complete-smoothing approach over the simple-smoothing approach. In particular, we will find that the gain in precision may be substantial in practice.

4. Simulation studies

In this section, we present two simple examples that allow us to vividly illustrate the issue raised in the previous sections. We provide some typical pictures resulting from simulated samples generated according to the scenarios described below. We may not conclude general statements from one simulated sample, but the idea is to provide a visualization of the problem. Throughout the simulations, we considered the following regression relationship:

$$ Y_i = a_0 + a_1 Z_i^d + b_1 Z_i^c + b_2 Z_i^d Z_i^c + b_3 (Z_i^c)^2 + b_4 Z_i^d \sin(\pi Z_i^c) + \varepsilon_i, \qquad i = 1, \ldots, n. \qquad (18) $$

Varying the choice of the parameters in model (18) leads to the various examples explored in this section.

Example 4.1 (linear versus periodic regression). Set $b_3 = 0$ and the remaining coefficients $a_0$, $a_1$, $b_1$, $b_2$, $b_4$ to nonzero constants, with $\varepsilon_i \sim N(0, \sigma_{\varepsilon,i}^2)$, where $\sigma_{\varepsilon,i}$ depends on $Z_i^d$ so that the group with $Z_i^d = 0$ is noisier. Here, for each simulation, $Z_i^c \sim U(0, 1)$ for the continuous variable $Z^c$, and the discrete variable $Z^d$ was set randomly at 1 if $W > 0.25$ and at 0 if $W \le 0.25$, where $W \sim U(0, 1)$. So we randomly obtained about 75% of the observations for the group with $Z_i^d = 1$ and about 25% for the group with $Z_i^d = 0$.
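The design of Example 4.1 can be mimicked with the short simulator below; the coefficient values and noise standard deviations used here are placeholders of our own choosing, since only the structure of (18) matters for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_example(n, a0=1.0, a1=1.0, b1=1.0, b2=0.5, b3=0.0, b4=1.0,
                     sigma=(2.0, 1.0)):
    """One sample from model (18); sigma = (noise sd when Z^d = 0, when Z^d = 1).
    All numeric values are illustrative placeholders, not the paper's settings."""
    Zc = rng.uniform(0.0, 1.0, size=n)
    Zd = (rng.uniform(size=n) > 0.25).astype(int)        # about 75% of obs have Z^d = 1
    m = a0 + a1*Zd + b1*Zc + b2*Zd*Zc + b3*Zc**2 + b4*Zd*np.sin(np.pi*Zc)
    eps = rng.normal(scale=np.where(Zd == 0, sigma[0], sigma[1]), size=n)
    return m + eps, Zc.reshape(-1, 1), Zd.reshape(-1, 1), m
```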

In this example, for the group with $z^d = 0$ we have a linear model with more noisy data and a smaller sample size, and for the group with $z^d = 1$ we have a linear model (with a different intercept and slope) plus a cyclical component. In Figure 1, we present typical results of the estimation by using the three approaches: the fully separate estimation of the two subsamples (i.e., non-smoothing over the discrete variable), Approach 1; the simple-smoothing (i.e., smoothing the discrete variable with a common bandwidth for the continuous regressor), Approach 2; and the complete-smoothing (i.e., smoothing the discrete variable and keeping potentially different bandwidths for the continuous variable), Approach 3. Only two cases of sample size are provided in Figure 1: n = … and n = … (similar pictures have been obtained for other sizes). From the left panels, we can see that the simple-smoothing (Approach 2) suffers from a serious drawback in this scenario and that the non-smoothing estimation (Approach 1) gives much better results for both sample sizes. The complete-smoothing of Approach 3, encompassing the two preceding ones, does as well as the fully separate analysis for these samples. The fully separate estimation substantially outperforms the estimation with simple-smoothing over the discrete regressor, as the latter approach under-smooths for the smaller group and slightly over-smooths for the larger group. Note that for this example the under-smoothing is more pronounced because the larger group dominates the pooled sample by its size, and so the common bandwidth selected in the CV optimization of $(h, \lambda)$ is relatively close to what is optimal for that group in the separate estimation, while for the smaller group the true optimal bandwidth must in fact go to infinity to attain the correctly specified parametric model. The complete-smoothing (Approach 3) avoids such a problem by allowing different bandwidths for the continuous regressor $Z^c$ in the two groups.

The numerical results of the Monte-Carlo experiment summarized in Table 1 further confirm the above analysis from Figure 1. For each of the sample sizes considered, we conducted $M$ Monte-Carlo replications. The table provides the mean of the Approximate Mean Squared Error (AMSE), averaged over the $M$ Monte-Carlo replications, i.e., $\overline{AMSE} = (1/M) \sum_{g=1}^{M} AMSE_g$, where the AMSE for each replication $g$ ($g = 1, \ldots, M$) is defined as

$$ AMSE_g = \frac{1}{n} \sum_{i=1}^{n} \big[ \hat{m}(Z_{i,g}^c, Z_{i,g}^d) - m(Z_{i,g}^c, Z_{i,g}^d) \big]^2, $$

where $\hat{m}(\cdot, \cdot)$ is the estimate of the true regression function $m(\cdot, \cdot)$ obtained by each of the three approaches, respectively. Table 1 also gives the estimated standard deviation of the AMSE, defined as

$$ std_{MC} = \sqrt{ \frac{1}{M(M-1)} \sum_{g=1}^{M} \big( AMSE_g - \overline{AMSE} \big)^2 }, $$

which can be used to check whether the differences observed in the table for the AMSE are significant. To save space, Split, Simpl and Compl in the table represent Approach 1, Approach 2 and Approach 3, respectively. Note that all the figures appearing in Table 1 vary as expected when $n$ increases. It is worth noting that for all the simulations in this scenario, the CV method yields $\hat{\lambda}$ very close to zero for both Approach 2 and Approach 3, and that Approach 3 gave systematically less weight to the smoothing of the discrete variable than Approach 2 (in terms of medians over the replications). In Table 1, we can also see that the AMSE of Approach 1 is always smaller than that of Approach 2 (often about half as large), and that this does not vanish when the sample size increases. Note that the difference in AMSE is much larger for the smaller group, which has been explained above based on the findings of Figure 1.
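The AMSE and its Monte-Carlo standard error defined above could be computed along the following lines (again a sketch under our own conventions, reusing the simulator above; `fit_at` stands for any of the three estimation approaches returning fitted values at the sample points, and the number of replications `M` is a placeholder).

```python
import numpy as np

def amse_summary(n, fit_at, M=100):
    """Mean AMSE over M replications and its Monte-Carlo standard error std_MC."""
    amse = np.empty(M)
    for g in range(M):
        Y, Zc, Zd, m_true = simulate_example(n)
        m_hat = fit_at(Y, Zc, Zd)                # fitted values at the sample points
        amse[g] = np.mean((m_hat - m_true) ** 2)
    std_mc = np.sqrt(np.sum((amse - amse.mean()) ** 2) / (M * (M - 1)))
    return amse.mean(), std_mc
```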
Meanwhile, from the medians of the bandwidths selected by the different approaches, we see clearly that Approach 2 under-smooths the continuous variable for the smaller group. Furthermore, we can see that the complete-smoothing approach is, as expected and explained above, very robust here, since it gives almost the same results as Approach 1. Note also that, as a consequence of the theory provided by Racine and Li (2004) and Li and Racine (2004), in all the cases the AMSE decreases as $n$ increases, and the optimal bandwidths (except the bandwidth for the group where the true regression is linear, when computed separately) go to zero as $n$ goes to infinity. On the other hand, $\hat{h}(1)$ and $\hat{h}(2)$ chosen by the CV method using Approach 3 are similar to those obtained using Approach 1, and the optimal $\hat{\lambda}$ using Approach 3 is smaller than that using Approach 2, indicating that Approach 2 suggests more similarities between the groups than Approach 3.

[Figure 1 here. Panels (top to bottom, for each of the two sample sizes): "True Function and its Estimates" under Splitting, Simple Smoothing and Complete Smoothing; each panel reports the selected bandwidths and shows the observations, the estimated regressions and the true regressions for the two groups.]

Figure 1. Example 4.1: Left panel, n = …, and right panel, n = …. From top to bottom: Approach 1 (non-smoothing for the discrete variable), Approach 2 (local linear simple-smoothing), Approach 3 (local linear complete-smoothing).

Example 4.2 (quadratic versus periodic regression). We next consider a quadratic versus a periodic regression in the two groups, which is similar to Example 4.1 except that now the linear model is wrong in both groups. We selected the values of $a_0$, $a_1$, $b_1$, $b_2$, $b_4$ in equation (18) together with $b_3 = .5$, with all the other assumptions remaining the same as in Example 4.1.

The simulation results are given in Figure 2 and Table 2. We find that in this case the difference between Approach 2 and the other two approaches is still present. From Table 2, we confirm the general comments given for Example 4.1: Approach 1 has better performance than Approach 2, and Approach 3 is the more robust way in this scenario (with significantly better performance). In particular, note that even though the difference in curvatures between the two groups is not as extreme as in Example 4.1, the difference in performance is still substantial: the overall AMSE for Approach 2 is about 1.5 times larger than those for Approaches 1 and 3 for the smallest sample size, and about twice as large for the larger samples.

Consequences for the estimation of derivatives

The estimation of derivatives exhibits a phenomenon similar to what we have just described. To save space, we only illustrate this for the scenario described in Example 4.1. Figure 3 displays one typical sample and the resulting estimates of the first partial derivatives using the three approaches. This figure shows that the estimation of derivatives can be even more severely flawed by using the simple-smoothing approach. Indeed, as one can clearly see from Figure 3, with the simple-smoothing technique one obtains radically varying estimated curves of the derivatives for the group where their true values are constant. The problem persists whether the total sample size is … or … (or more). This means that research conclusions, policy implications and, consequently, the real policy decisions based on such estimates may be misleading. Note that for the same example, the complete-smoothing approach produced much better results, which are very close to the true values. The Monte-Carlo experiment results based on the regression model (18) with more different choices of parameters can be found in the working paper version of Li et al. (2013). As those results are similar, we omit them from the paper to save space.

5. Empirical application

In this section, we illustrate the phenomenon discussed above in the context of a real data set from the study by Kumar and Russell (2002) about patterns of convergence or divergence in economic growth in the world. We chose this data set and this context because the topic of economic growth has remained interesting for a wide audience for centuries. The data set consists of observations on 57 countries, containing variables such as the GDP, labour and capital of each country in 1965 and in 1990, and was originally extracted from the Penn World Tables.² We will use these data to estimate the regression relationship between the growth in GDP per capita of countries between 1965 and 1990 (response variable) and the initial levels of GDP per capita of these countries (continuous explanatory variable). Such a regression and many of its variations are often performed in empirical economic growth studies on convergence. The interest of such studies often lies in whether the growth rates of poorer countries are, on average, higher than those of the richer countries, and thus whether the poorer countries would eventually catch up with, or converge to, the levels of GDP per capita of the richer countries. This is often referred to as the (unconditional) beta-convergence phenomenon. Earlier works on this issue employed parametric regression models. Some studies found that the slope coefficient (the beta) in such regressions is negative and significantly different from zero, which supports the beta-convergence hypothesis.
However, other studies found that the beta is insignificantly different from zero (i.e., no convergence), or even positive (i.e., beta-divergence) and significantly different from zero, for different samples, for distinct groups of countries within a sample, or when additional explanatory variables are accounted for.³ We next use the nonparametric regression method, which may give some useful insights. In particular, we apply the LLLS with the three approaches discussed above to the following regression relationship:

$$ y_i = m(z_i^c, z_i^d) + \varepsilon_i, \qquad i = 1, \ldots, n, $$

where $y_i$ is the growth in GDP per capita of country $i$ between 1965 and 1990, $z_i^c$ is the natural log of GDP per capita of country $i$ in 1965, while $z_i^d$ is a discrete variable which will be defined later, and $\varepsilon_i$ is a stationary noise satisfying $E(\varepsilon_i \mid z_i^c, z_i^d) = 0$ and $Var(\varepsilon_i \mid z_i^c, z_i^d) < \infty$ a.s.

² This data set (or its extended version) was also used in many other applications; see, for example, Henderson and Russell (2005), Simar and Zelenyuk, Henderson and Zelenyuk (2007) and Badunenko et al. (2008).

³ For a recent review of this topic see, for example, Maasoumi et al. (2007), Weil (2008) and the references cited therein.
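Schematically, and reusing the sketches from Sections 2 and 3 (with stand-in arrays rather than the actual Kumar-Russell data, whose values we do not reproduce here), the three approaches could be run on this kind of data as follows.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in arrays with the same shape as the empirical variables (not the real data).
n = 57
log_gdp_1965 = rng.uniform(6.0, 10.0, size=n)             # z^c: log GDP per capita in 1965
oecd = (rng.uniform(size=n) < 0.4).astype(int)            # z^d: 1 if OECD member in 1965
growth = 0.5 - 0.05 * log_gdp_1965 + rng.normal(0.0, 0.3, size=n)  # y: growth 1965-1990

Y, Zc, Zd = growth, log_gdp_1965.reshape(-1, 1), oecd.reshape(-1, 1)

# Approach 2 (simple smoothing): a single (h, lambda) for the whole sample.
h2, lam2 = simple_smoothing_cv(Y, Zc, Zd,
                               h_grid=np.linspace(0.1, 2.0, 20),
                               lam_grid=np.linspace(0.0, 1.0, 11))

# Approach 1 is the special case lambda = 0 with h chosen separately for each group;
# Approach 3 would cross-validate group-specific bandwidths (h(1), h(2), lambda).
# As a starting point, evaluate the complete-smoothing fit at the common CV choice:
m_hat = np.array([complete_smoothing_fit(Y, Zc, Zd, Zc[i], Zd[i],
                                         h_by_group=(h2, h2), lam=lam2)[0]
                  for i in range(n)])
```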

[Figure 2 here. Panels (top to bottom, for each of the two sample sizes): "True Function and its Estimates" for Example 4.2 under Splitting, Simple Smoothing and Complete Smoothing; each panel reports the selected bandwidths and shows the observations, the estimated regressions and the true regressions for the two groups.]

Figure 2. Example 4.2: Left panel, n = …, and right panel, n = …. From top to bottom: Approach 1 (non-smoothing for the discrete variable), Approach 2 (local linear simple-smoothing), Approach 3 (local linear complete-smoothing).

The estimation results are shown in Figure 4, which also includes some information on the resulting bandwidths chosen by the different approaches. In panel (a) of Figure 4, we give the LLLS estimated curve without the discrete variable, with $h$ chosen by the CV method. From this figure, one may conjecture that there might be different groups

within the sample which have different regression relationships (in terms of the intercept, the slope, or both). Indeed, in various existing studies researchers often distinguish various groups of countries, allowing them to have different regression relationships. An objective grouping criterion often used in practice, for example, is an indicator of whether

a country is an OECD member or not (e.g., Racine et al., 2006; Simar and Zelenyuk; Maasoumi et al., 2007; Henderson and Zelenyuk, 2007), and so we also use this as our discrete variable, $z_i^d$, which takes the value 1 if country $i$ was a member of the OECD in the year 1965 and zero otherwise.⁴

⁴ In a detailed analysis, one may want to condition on many other potentially important explanatory variables, yet we will limit our illustration to the case of one continuous and one discrete explanatory variable for the sake of ease of graphical representation of the phenomenon.

[Figure 3 here. Panels (top to bottom, for each of the two sample sizes): "True and Estimated 1st Partial Derivative of Y" under Splitting, Simple Smoothing and Complete Smoothing; each panel reports the selected bandwidths and shows the true and fitted derivative values for the two groups.]

Figure 3. Derivative estimates for Example 4.1: Left panel, n = …, and right panel, n = …. From top to bottom: Approach 1 (non-smoothing for the discrete variable), Approach 2 (local linear simple-smoothing), Approach 3 (local linear complete-smoothing).

[Figure 4 here. Four panels of observed data and fitted values of the dependent variable: "Estimation without discrete variable", "Splitting", "Simple Smoothing" and "Complete Smoothing", each reporting the selected bandwidths and distinguishing the two groups of observations.]

Figure 4. Illustration with the GDP data. From left to right and top to bottom, Panel (a): LLLS without the discrete variable, Panel (b): Approach 1 (non-smoothing for the discrete variable), Panel (c): Approach 2 (local linear simple-smoothing), Panel (d): Approach 3 (local linear complete-smoothing).

In panel (b) of Figure 4, we give the LLLS estimated curves using Approach 1 (i.e., separate estimation for each group, with $h$ chosen via CV for each group separately), and one can see that the estimated relationships for the two groups are very different, not only in the intercept but also in the slope. Specifically, note that the relationship for the larger non-OECD group is virtually flat, with a very slight inverted-U-shape curvature. On the other hand, the relationship for the smaller OECD group has a more pronounced inverted-U-shape curvature (or rather an inverted hockey-stick shape). Such curvature may suggest an important economic implication. It hints that the OECD countries with very low initial GDP per capita are expected to have higher growth rates in GDP per capita than those with very high initial GDP per capita, yet the highest rates are expected to occur not at the lowest level of GDP per capita but at a somewhat larger one.

In panel (c) of Figure 4, we present the LLLS estimated curves using Approach 2 (i.e., local linear simple-smoothing with $h$ and $\lambda$ selected jointly via the CV method for the entire sample). One can see that the estimated

relationships are also very different between the two groups. The relationship for the non-OECD group has a slightly more pronounced inverted-U-shape curvature, although it still remains relatively flat. On the other hand, the relationship for the OECD group has much less curvature than that in panel (b), and it is not an inverted-U-shape at all. Hence, Approach 2 suggests that for the OECD countries there is an almost linear and negative relationship between the growth in GDP per capita and the initial level of GDP per capita. In other words, with the local linear simple-smoothing approach we get some under-smoothing for the larger group and over-smoothing for the smaller group compared with panel (b), obtained by the separate estimation approach. This is similar to what we have observed in the simulated examples.

Some additional insight is provided by the local linear complete-smoothing method (Approach 3), where we allow each group identified by the discrete variable to have its own bandwidth, but also smooth the discrete variable and so use the full sample in one estimation. Panel (d) of Figure 4 visualizes the estimated curves using Approach 3, and one can see that it gives almost identical results to those using Approach 1, which is similar to what we have observed in the simulation studies. In this small data set, we can also see that the left-most observation in the OECD group might be an outlier, and so omitting it when using Approaches 1 and 3 may produce results very similar to Approach 2. However, it might be the case that there are other data points, not available in our sample, which are close to this left-most observation, and including them would make the inverted-U-shape curvature even more pronounced. Since we do not know the true relationship, unlike in the simulated examples, it is difficult to judge which of these two arguments is likely to be right or wrong, which is beyond the scope of this paper. Since Approach 3 encompasses the other two approaches by taking the best features from each, and since our simulations suggested that Approach 3 was more robust than the other two, Approach 3 appears to be more reliable for a practitioner to trust in this context, and perhaps in general, whenever it is computationally feasible.

Finally, it might be worth emphasizing again that in this section we did not intend to resolve the puzzles of economic growth across countries, as such a study would require a larger data set and more variables. Our aim was just to give a concise and vivid illustration of the phenomenon we discussed above and, in particular, to compare the three approaches, not only for simulated data sets but also for a real data set, in a context that appears to be interesting for a wide audience.

6. Conclusion

In this article we have pointed out and illustrated that the reduction in variance, or the efficiency gain, due to smoothing the discrete regressors with a common bandwidth for the continuous variables across groups, as is frequently done in applied studies, can be well outweighed by the substantial bias introduced by this simple-smoothing approach, both for small and for relatively large sample cases. For such cases, even a fully separate estimation for each group, if feasible, might be preferred. We have shown that the complete-smoothing technique, by allowing different bandwidths for the continuous variables in each group, can overcome this difficulty and so is more robust than the existing smoothing methods.
In general, whether it is better to smooth or not to smooth the discrete variable, or whether the bias beats the variance, essentially depends on the degree of difference of the DGPs in the different groups: the curvatures of the regression relationship, the variation in the error term for each group, the variation in the continuous regressors, the size or proportion of one group relative to another in the sample, and so on. The more robust complete-smoothing approach proposed in this paper is indeed a generalization of the seminal work by Racine and Li (2004), but at a cost of slightly more computational complexity. When using the simple-smoothing method, one automatically (or implicitly) imposes the assumption of a similar degree of smoothness of the regression relationships across the different groups of the sample identified by the discrete variables, which might be far from reality. As we illustrated in both simulated and empirical examples, such a restriction can significantly deteriorate the estimation results, increasing the bias in the estimates of the true regression relationship. Such a problem can also substantially, or even radically, distort estimates of derivatives and the related estimates of marginal effects and elasticities, which are used to draw policy implications. It is also important to recognize that even if, from a theoretical point of view, the simple-smoothing of the discrete variable is preferable and offers a suitable solution to the problem, it is still an open question in practice, for some real data sets, whether we have to smooth or not to smooth the discrete variables, particularly when the computational cost of the extended method is prohibitive.

A possible future topic is to extend the methodology to the case where both the discrete and the continuous covariates are multivariate. To avoid the curse of dimensionality issue in nonparametric estimation, inspired by the

recent work of Li et al. (2013), we may consider the following regression model with functional coefficients:

$$ Y_i = \beta^{\tau}(Z_i) X_i + \varepsilon_i = \beta_1^{\tau}(Z_i) X_i^c + \beta_2^{\tau}(Z_i) X_i^d + \varepsilon_i, $$

where $X_i = (X_i^c, X_i^d)$ with $X_i^c \in \mathbb{R}^{r_1}$ being continuous and $X_i^d$ being an $r_2$-dimensional discrete vector, $\beta_1(\cdot)$ and $\beta_2(\cdot)$ are functional coefficients with dimensions $r_1$ and $r_2$, respectively, and $Z_i = (Z_i^c, Z_i^d)$ with $Z_i^c$ being a univariate continuous covariate and $Z_i^d$ a univariate discrete covariate. It would be interesting to apply the local linear complete-smoothing to estimate the functional coefficients in this modelling framework and to study its asymptotic theory and empirical applications. Another possible future work is to develop and justify a method (a specification test or a rule of thumb) that would help in deciding whether to smooth or not to smooth over some or all of the discrete variables. The issue of the relevance of some categorical predictors in nonparametric regressions has been analyzed in Racine et al. (2006) by considering the hypothesis testing problem of $\lambda = 1$. However, to the best of our knowledge, nothing has been done for the other extreme of the scale ($\lambda = 0$), including the issue of common bandwidths for the continuous variables. It would also be interesting to investigate whether the complete-smoothing method we propose in this paper can also improve the performance of various tests that employ the simple-smoothing method (see, for example, Racine et al., 2006; Hsiao et al.).

Acknowledgements

We would like to thank the editor, the associate editor, and two referees for their helpful comments, which greatly improved the previous version of the paper. Thanks also go to Peter C. B. Phillips, Jeff Racine and other colleagues who commented on this paper, and to the participants of the various conferences and seminars where this work was presented. The second and the third authors acknowledge the financial support from the ARC Discovery Grant (DP…3…), the IAP Research Network P7/… of the Belgian State (Belgian Science Policy) and the School of Economics and CEPA of The University of Queensland. Only the authors, and not the above-mentioned institutions or people, remain responsible for the views expressed.

Appendix A. Assumptions

In this appendix, we give the regularity assumptions which are sufficient to derive the asymptotic theory of the proposed approach in Section 3.

Assumption 1. Let $\{(Y_i, Z_i^c, Z_i^d)\}$ be independent and identically distributed (i.i.d.) as $(Y, Z^c, Z^d)$, and let the error term $\varepsilon_i$ have a finite $(2+\delta)$th moment with $\delta > 0$.

Assumption 2. Let $K(\cdot)$ be a continuous and symmetric probability density function with a compact support.

Assumption 3. The conditional density function of $Z^c$ given $Z^d$, $f(z^c \mid z^d)$, is bounded away from infinity and zero for $z^c \in \mathcal{Z}^c$ and $z^d = z^d(1)$ or $z^d(2)$, where $\mathcal{Z}^c$ is the compact support of $Z^c$. Meanwhile, $m(\cdot, z^d)$, $\sigma^2(\cdot, z^d)$ and $f(\cdot \mid z^d)$ are twice continuously differentiable on $\mathcal{Z}^c$ for $z^d = z^d(1)$ or $z^d(2)$, where $\sigma^2(z^c, z^d) = Var[\varepsilon \mid Z = (z^c, z^d)]$.

Assumption 4. Let the bandwidths $h(1)$, $h(2)$ and $\lambda$ satisfy $h(1) \to 0$, $h(2) \to 0$, $n h(1) \to \infty$, $n h(2) \to \infty$, and $\lambda = O\big(h^2(1) + h^2(2)\big)$.

Assumption 5. Let the bandwidths $h(1)$ and $h(2)$ satisfy $n^{1-\epsilon} h(1) \to \infty$ and $n^{1-\epsilon} h(2) \to \infty$ with $0 < \epsilon < (\delta + 1)/(2 + \delta)$, where $\delta$ is defined in Assumption 1. Furthermore, $n h^4(1) \to \infty$ and $n h^4(2) \to \infty$.


More information

Taylor Series and the Mean Value Theorem of Derivatives

Taylor Series and the Mean Value Theorem of Derivatives 1 - Taylor Series and te Mean Value Teorem o Derivatives Te numerical solution o engineering and scientiic problems described by matematical models oten requires solving dierential equations. Dierential

More information

Basic Nonparametric Estimation Spring 2002

Basic Nonparametric Estimation Spring 2002 Basic Nonparametric Estimation Spring 2002 Te following topics are covered today: Basic Nonparametric Regression. Tere are four books tat you can find reference: Silverman986, Wand and Jones995, Hardle990,

More information

Average Rate of Change

Average Rate of Change Te Derivative Tis can be tougt of as an attempt to draw a parallel (pysically and metaporically) between a line and a curve, applying te concept of slope to someting tat isn't actually straigt. Te slope

More information

Copyright c 2008 Kevin Long

Copyright c 2008 Kevin Long Lecture 4 Numerical solution of initial value problems Te metods you ve learned so far ave obtained closed-form solutions to initial value problems. A closedform solution is an explicit algebriac formula

More information

A Reconsideration of Matter Waves

A Reconsideration of Matter Waves A Reconsideration of Matter Waves by Roger Ellman Abstract Matter waves were discovered in te early 20t century from teir wavelengt, predicted by DeBroglie, Planck's constant divided by te particle's momentum,

More information

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example,

NUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example, NUMERICAL DIFFERENTIATION James T Smit San Francisco State University In calculus classes, you compute derivatives algebraically: for example, f( x) = x + x f ( x) = x x Tis tecnique requires your knowing

More information

LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION

LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LAURA EVANS.. Introduction Not all differential equations can be explicitly solved for y. Tis can be problematic if we need to know te value of y

More information

Combining functions: algebraic methods

Combining functions: algebraic methods Combining functions: algebraic metods Functions can be added, subtracted, multiplied, divided, and raised to a power, just like numbers or algebra expressions. If f(x) = x 2 and g(x) = x + 2, clearly f(x)

More information

Time (hours) Morphine sulfate (mg)

Time (hours) Morphine sulfate (mg) Mat Xa Fall 2002 Review Notes Limits and Definition of Derivative Important Information: 1 According to te most recent information from te Registrar, te Xa final exam will be eld from 9:15 am to 12:15

More information

A Jump-Preserving Curve Fitting Procedure Based On Local Piecewise-Linear Kernel Estimation

A Jump-Preserving Curve Fitting Procedure Based On Local Piecewise-Linear Kernel Estimation A Jump-Preserving Curve Fitting Procedure Based On Local Piecewise-Linear Kernel Estimation Peiua Qiu Scool of Statistics University of Minnesota 313 Ford Hall 224 Curc St SE Minneapolis, MN 55455 Abstract

More information

New families of estimators and test statistics in log-linear models

New families of estimators and test statistics in log-linear models Journal of Multivariate Analysis 99 008 1590 1609 www.elsevier.com/locate/jmva ew families of estimators and test statistics in log-linear models irian Martín a,, Leandro Pardo b a Department of Statistics

More information

Differential Calculus (The basics) Prepared by Mr. C. Hull

Differential Calculus (The basics) Prepared by Mr. C. Hull Differential Calculus Te basics) A : Limits In tis work on limits, we will deal only wit functions i.e. tose relationsips in wic an input variable ) defines a unique output variable y). Wen we work wit

More information

. If lim. x 2 x 1. f(x+h) f(x)

. If lim. x 2 x 1. f(x+h) f(x) Review of Differential Calculus Wen te value of one variable y is uniquely determined by te value of anoter variable x, ten te relationsip between x and y is described by a function f tat assigns a value

More information

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1 Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1 By Jiti Gao 2 and Maxwell King 3 Abstract We propose a simultaneous model specification procedure for te conditional

More information

Boosting Kernel Density Estimates: a Bias Reduction. Technique?

Boosting Kernel Density Estimates: a Bias Reduction. Technique? Boosting Kernel Density Estimates: a Bias Reduction Tecnique? Marco Di Marzio Dipartimento di Metodi Quantitativi e Teoria Economica, Università di Cieti-Pescara, Viale Pindaro 42, 65127 Pescara, Italy

More information

3.4 Worksheet: Proof of the Chain Rule NAME

3.4 Worksheet: Proof of the Chain Rule NAME Mat 1170 3.4 Workseet: Proof of te Cain Rule NAME Te Cain Rule So far we are able to differentiate all types of functions. For example: polynomials, rational, root, and trigonometric functions. We are

More information

HOMEWORK HELP 2 FOR MATH 151

HOMEWORK HELP 2 FOR MATH 151 HOMEWORK HELP 2 FOR MATH 151 Here we go; te second round of omework elp. If tere are oters you would like to see, let me know! 2.4, 43 and 44 At wat points are te functions f(x) and g(x) = xf(x)continuous,

More information

UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING

UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING Statistica Sinica 15(2005), 73-98 UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING Peter Hall 1 and Kee-Hoon Kang 1,2 1 Australian National University and 2 Hankuk University of Foreign Studies Abstract:

More information

MVT and Rolle s Theorem

MVT and Rolle s Theorem AP Calculus CHAPTER 4 WORKSHEET APPLICATIONS OF DIFFERENTIATION MVT and Rolle s Teorem Name Seat # Date UNLESS INDICATED, DO NOT USE YOUR CALCULATOR FOR ANY OF THESE QUESTIONS In problems 1 and, state

More information

Uniform Consistency for Nonparametric Estimators in Null Recurrent Time Series

Uniform Consistency for Nonparametric Estimators in Null Recurrent Time Series Te University of Adelaide Scool of Economics Researc Paper No. 2009-26 Uniform Consistency for Nonparametric Estimators in Null Recurrent Time Series Jiti Gao, Degui Li and Dag Tjøsteim Te University of

More information

THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA EXAMINATION MODULE 5

THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA EXAMINATION MODULE 5 THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA EXAMINATION NEW MODULAR SCHEME introduced from te examinations in 009 MODULE 5 SOLUTIONS FOR SPECIMEN PAPER B THE QUESTIONS ARE CONTAINED IN A SEPARATE FILE

More information

Mathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative

Mathematics 5 Worksheet 11 Geometry, Tangency, and the Derivative Matematics 5 Workseet 11 Geometry, Tangency, and te Derivative Problem 1. Find te equation of a line wit slope m tat intersects te point (3, 9). Solution. Te equation for a line passing troug a point (x

More information

IEOR 165 Lecture 10 Distribution Estimation

IEOR 165 Lecture 10 Distribution Estimation IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat

More information

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point

1 The concept of limits (p.217 p.229, p.242 p.249, p.255 p.256) 1.1 Limits Consider the function determined by the formula 3. x since at this point MA00 Capter 6 Calculus and Basic Linear Algebra I Limits, Continuity and Differentiability Te concept of its (p.7 p.9, p.4 p.49, p.55 p.56). Limits Consider te function determined by te formula f Note

More information

Teaching Differentiation: A Rare Case for the Problem of the Slope of the Tangent Line

Teaching Differentiation: A Rare Case for the Problem of the Slope of the Tangent Line Teacing Differentiation: A Rare Case for te Problem of te Slope of te Tangent Line arxiv:1805.00343v1 [mat.ho] 29 Apr 2018 Roman Kvasov Department of Matematics University of Puerto Rico at Aguadilla Aguadilla,

More information

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these. Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra

More information

Lecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines

Lecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines Lecture 5 Interpolation II Introduction In te previous lecture we focused primarily on polynomial interpolation of a set of n points. A difficulty we observed is tat wen n is large, our polynomial as to

More information

One-Sided Position-Dependent Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meshes

One-Sided Position-Dependent Smoothness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meshes DOI 10.1007/s10915-014-9946-6 One-Sided Position-Dependent Smootness-Increasing Accuracy-Conserving (SIAC) Filtering Over Uniform and Non-uniform Meses JenniferK.Ryan Xiaozou Li Robert M. Kirby Kees Vuik

More information

Fast optimal bandwidth selection for kernel density estimation

Fast optimal bandwidth selection for kernel density estimation Fast optimal bandwidt selection for kernel density estimation Vikas Candrakant Raykar and Ramani Duraiswami Dept of computer science and UMIACS, University of Maryland, CollegePark {vikas,ramani}@csumdedu

More information

Printed Name: Section #: Instructor:

Printed Name: Section #: Instructor: Printed Name: Section #: Instructor: Please do not ask questions during tis exam. If you consider a question to be ambiguous, state your assumptions in te margin and do te best you can to provide te correct

More information

Click here to see an animation of the derivative

Click here to see an animation of the derivative Differentiation Massoud Malek Derivative Te concept of derivative is at te core of Calculus; It is a very powerful tool for understanding te beavior of matematical functions. It allows us to optimize functions,

More information

Chapter 2 Limits and Continuity

Chapter 2 Limits and Continuity 4 Section. Capter Limits and Continuity Section. Rates of Cange and Limits (pp. 6) Quick Review.. f () ( ) () 4 0. f () 4( ) 4. f () sin sin 0 4. f (). 4 4 4 6. c c c 7. 8. c d d c d d c d c 9. 8 ( )(

More information

How to Find the Derivative of a Function: Calculus 1

How to Find the Derivative of a Function: Calculus 1 Introduction How to Find te Derivative of a Function: Calculus 1 Calculus is not an easy matematics course Te fact tat you ave enrolled in suc a difficult subject indicates tat you are interested in te

More information

Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems

Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems Applied Matematics, 06, 7, 74-8 ttp://wwwscirporg/journal/am ISSN Online: 5-7393 ISSN Print: 5-7385 Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for

More information

HOW TO DEAL WITH FFT SAMPLING INFLUENCES ON ADEV CALCULATIONS

HOW TO DEAL WITH FFT SAMPLING INFLUENCES ON ADEV CALCULATIONS HOW TO DEAL WITH FFT SAMPLING INFLUENCES ON ADEV CALCULATIONS Po-Ceng Cang National Standard Time & Frequency Lab., TL, Taiwan 1, Lane 551, Min-Tsu Road, Sec. 5, Yang-Mei, Taoyuan, Taiwan 36 Tel: 886 3

More information

ch (for some fixed positive number c) reaching c

ch (for some fixed positive number c) reaching c GSTF Journal of Matematics Statistics and Operations Researc (JMSOR) Vol. No. September 05 DOI 0.60/s4086-05-000-z Nonlinear Piecewise-defined Difference Equations wit Reciprocal and Cubic Terms Ramadan

More information

A.P. CALCULUS (AB) Outline Chapter 3 (Derivatives)

A.P. CALCULUS (AB) Outline Chapter 3 (Derivatives) A.P. CALCULUS (AB) Outline Capter 3 (Derivatives) NAME Date Previously in Capter 2 we determined te slope of a tangent line to a curve at a point as te limit of te slopes of secant lines using tat point

More information

Artificial Neural Network Model Based Estimation of Finite Population Total

Artificial Neural Network Model Based Estimation of Finite Population Total International Journal of Science and Researc (IJSR), India Online ISSN: 2319-7064 Artificial Neural Network Model Based Estimation of Finite Population Total Robert Kasisi 1, Romanus O. Odiambo 2, Antony

More information

Polynomials 3: Powers of x 0 + h

Polynomials 3: Powers of x 0 + h near small binomial Capter 17 Polynomials 3: Powers of + Wile it is easy to compute wit powers of a counting-numerator, it is a lot more difficult to compute wit powers of a decimal-numerator. EXAMPLE

More information

Boosting local quasi-likelihood estimators

Boosting local quasi-likelihood estimators Ann Inst Stat Mat (00) 6:5 48 DOI 0.007/s046-008-07-5 Boosting local quasi-likeliood estimators Masao Ueki Kaoru Fueda Received: Marc 007 / Revised: 8 February 008 / Publised online: 5 April 008 Te Institute

More information

Cubic Functions: Local Analysis

Cubic Functions: Local Analysis Cubic function cubing coefficient Capter 13 Cubic Functions: Local Analysis Input-Output Pairs, 378 Normalized Input-Output Rule, 380 Local I-O Rule Near, 382 Local Grap Near, 384 Types of Local Graps

More information

Recall from our discussion of continuity in lecture a function is continuous at a point x = a if and only if

Recall from our discussion of continuity in lecture a function is continuous at a point x = a if and only if Computational Aspects of its. Keeping te simple simple. Recall by elementary functions we mean :Polynomials (including linear and quadratic equations) Eponentials Logaritms Trig Functions Rational Functions

More information

Robust Average Derivative Estimation. February 2007 (Preliminary and Incomplete Do not quote without permission)

Robust Average Derivative Estimation. February 2007 (Preliminary and Incomplete Do not quote without permission) Robust Average Derivative Estimation Marcia M.A. Scafgans Victoria inde-wals y February 007 (Preliminary and Incomplete Do not quote witout permission) Abstract. Many important models, suc as index models

More information

MATH 1020 Answer Key TEST 2 VERSION B Fall Printed Name: Section #: Instructor:

MATH 1020 Answer Key TEST 2 VERSION B Fall Printed Name: Section #: Instructor: Printed Name: Section #: Instructor: Please do not ask questions during tis exam. If you consider a question to be ambiguous, state your assumptions in te margin and do te best you can to provide te correct

More information

THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Math 225

THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Math 225 THE IDEA OF DIFFERENTIABILITY FOR FUNCTIONS OF SEVERAL VARIABLES Mat 225 As we ave seen, te definition of derivative for a Mat 111 function g : R R and for acurveγ : R E n are te same, except for interpretation:

More information

Differentiation in higher dimensions

Differentiation in higher dimensions Capter 2 Differentiation in iger dimensions 2.1 Te Total Derivative Recall tat if f : R R is a 1-variable function, and a R, we say tat f is differentiable at x = a if and only if te ratio f(a+) f(a) tends

More information

SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY

SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY (Section 3.2: Derivative Functions and Differentiability) 3.2.1 SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY LEARNING OBJECTIVES Know, understand, and apply te Limit Definition of te Derivative

More information

DEPARTMENT MATHEMATIK SCHWERPUNKT MATHEMATISCHE STATISTIK UND STOCHASTISCHE PROZESSE

DEPARTMENT MATHEMATIK SCHWERPUNKT MATHEMATISCHE STATISTIK UND STOCHASTISCHE PROZESSE U N I V E R S I T Ä T H A M B U R G A note on residual-based empirical likeliood kernel density estimation Birte Musal and Natalie Neumeyer Preprint No. 2010-05 May 2010 DEPARTMENT MATHEMATIK SCHWERPUNKT

More information

2.1 THE DEFINITION OF DERIVATIVE

2.1 THE DEFINITION OF DERIVATIVE 2.1 Te Derivative Contemporary Calculus 2.1 THE DEFINITION OF DERIVATIVE 1 Te grapical idea of a slope of a tangent line is very useful, but for some uses we need a more algebraic definition of te derivative

More information

Bootstrap prediction intervals for Markov processes

Bootstrap prediction intervals for Markov processes arxiv: arxiv:0000.0000 Bootstrap prediction intervals for Markov processes Li Pan and Dimitris N. Politis Li Pan Department of Matematics University of California San Diego La Jolla, CA 92093-0112, USA

More information

5.1 We will begin this section with the definition of a rational expression. We

5.1 We will begin this section with the definition of a rational expression. We Basic Properties and Reducing to Lowest Terms 5.1 We will begin tis section wit te definition of a rational epression. We will ten state te two basic properties associated wit rational epressions and go

More information

Quantum Theory of the Atomic Nucleus

Quantum Theory of the Atomic Nucleus G. Gamow, ZP, 51, 204 1928 Quantum Teory of te tomic Nucleus G. Gamow (Received 1928) It as often been suggested tat non Coulomb attractive forces play a very important role inside atomic nuclei. We can

More information

Journal of Computational and Applied Mathematics

Journal of Computational and Applied Mathematics Journal of Computational and Applied Matematics 94 (6) 75 96 Contents lists available at ScienceDirect Journal of Computational and Applied Matematics journal omepage: www.elsevier.com/locate/cam Smootness-Increasing

More information

The Laws of Thermodynamics

The Laws of Thermodynamics 1 Te Laws of Termodynamics CLICKER QUESTIONS Question J.01 Description: Relating termodynamic processes to PV curves: isobar. Question A quantity of ideal gas undergoes a termodynamic process. Wic curve

More information

Quantum Numbers and Rules

Quantum Numbers and Rules OpenStax-CNX module: m42614 1 Quantum Numbers and Rules OpenStax College Tis work is produced by OpenStax-CNX and licensed under te Creative Commons Attribution License 3.0 Abstract Dene quantum number.

More information

Notes on wavefunctions II: momentum wavefunctions

Notes on wavefunctions II: momentum wavefunctions Notes on wavefunctions II: momentum wavefunctions and uncertainty Te state of a particle at any time is described by a wavefunction ψ(x). Tese wavefunction must cange wit time, since we know tat particles

More information

A Locally Adaptive Transformation Method of Boundary Correction in Kernel Density Estimation

A Locally Adaptive Transformation Method of Boundary Correction in Kernel Density Estimation A Locally Adaptive Transformation Metod of Boundary Correction in Kernel Density Estimation R.J. Karunamuni a and T. Alberts b a Department of Matematical and Statistical Sciences University of Alberta,

More information

Estimation of boundary and discontinuity points in deconvolution problems

Estimation of boundary and discontinuity points in deconvolution problems Estimation of boundary and discontinuity points in deconvolution problems A. Delaigle 1, and I. Gijbels 2, 1 Department of Matematics, University of California, San Diego, CA 92122 USA 2 Universitair Centrum

More information

Uniform Convergence Rates for Nonparametric Estimation

Uniform Convergence Rates for Nonparametric Estimation Uniform Convergence Rates for Nonparametric Estimation Bruce E. Hansen University of Wisconsin www.ssc.wisc.edu/~bansen October 2004 Preliminary and Incomplete Abstract Tis paper presents a set of rate

More information

Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series

Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series Stationary Gaussian Markov Processes As Limits of Stationary Autoregressive Time Series Lawrence D. Brown, Pilip A. Ernst, Larry Sepp, and Robert Wolpert August 27, 2015 Abstract We consider te class,

More information

Kernel Density Estimation

Kernel Density Estimation Kernel Density Estimation Univariate Density Estimation Suppose tat we ave a random sample of data X 1,..., X n from an unknown continuous distribution wit probability density function (pdf) f(x) and cumulative

More information

1. Which one of the following expressions is not equal to all the others? 1 C. 1 D. 25x. 2. Simplify this expression as much as possible.

1. Which one of the following expressions is not equal to all the others? 1 C. 1 D. 25x. 2. Simplify this expression as much as possible. 004 Algebra Pretest answers and scoring Part A. Multiple coice questions. Directions: Circle te letter ( A, B, C, D, or E ) net to te correct answer. points eac, no partial credit. Wic one of te following

More information

Chapter 2 Ising Model for Ferromagnetism

Chapter 2 Ising Model for Ferromagnetism Capter Ising Model for Ferromagnetism Abstract Tis capter presents te Ising model for ferromagnetism, wic is a standard simple model of a pase transition. Using te approximation of mean-field teory, te

More information

Regression Discontinuity and Heteroskedasticity Robust Standard Errors: Evidence from a Fixed-Bandwidth Approximation

Regression Discontinuity and Heteroskedasticity Robust Standard Errors: Evidence from a Fixed-Bandwidth Approximation DISCUSSION PAPER SERIES IZA DP No. 56 Regression Discontinuity and Heteroskedasticity Robust Standard Errors: Evidence from a Fixed-Bandwidt Approximation Otávio Bartalotti MAY 28 DISCUSSION PAPER SERIES

More information

MTH 119 Pre Calculus I Essex County College Division of Mathematics Sample Review Questions 1 Created April 17, 2007

MTH 119 Pre Calculus I Essex County College Division of Mathematics Sample Review Questions 1 Created April 17, 2007 MTH 9 Pre Calculus I Essex County College Division of Matematics Sample Review Questions Created April 7, 007 At Essex County College you sould be prepared to sow all work clearly and in order, ending

More information

Nonparametric regression on functional data: inference and practical aspects

Nonparametric regression on functional data: inference and practical aspects arxiv:mat/0603084v1 [mat.st] 3 Mar 2006 Nonparametric regression on functional data: inference and practical aspects Frédéric Ferraty, André Mas, and Pilippe Vieu August 17, 2016 Abstract We consider te

More information

Numerical Differentiation

Numerical Differentiation Numerical Differentiation Finite Difference Formulas for te first derivative (Using Taylor Expansion tecnique) (section 8.3.) Suppose tat f() = g() is a function of te variable, and tat as 0 te function

More information

INTRODUCTION TO CALCULUS LIMITS

INTRODUCTION TO CALCULUS LIMITS Calculus can be divided into two ke areas: INTRODUCTION TO CALCULUS Differential Calculus dealing wit its, rates of cange, tangents and normals to curves, curve sketcing, and applications to maima and

More information

2.8 The Derivative as a Function

2.8 The Derivative as a Function .8 Te Derivative as a Function Typically, we can find te derivative of a function f at many points of its domain: Definition. Suppose tat f is a function wic is differentiable at every point of an open

More information

Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems

Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems Comp. Part. Mec. 04) :357 37 DOI 0.007/s4057-04-000-9 Optimal parameters for a ierarcical grid data structure for contact detection in arbitrarily polydisperse particle systems Dinant Krijgsman Vitaliy

More information

Chapter 1. Density Estimation

Chapter 1. Density Estimation Capter 1 Density Estimation Let X 1, X,..., X n be observations from a density f X x. Te aim is to use only tis data to obtain an estimate ˆf X x of f X x. Properties of f f X x x, Parametric metods f

More information

Mathematics 105 Calculus I. Exam 1. February 13, Solution Guide

Mathematics 105 Calculus I. Exam 1. February 13, Solution Guide Matematics 05 Calculus I Exam February, 009 Your Name: Solution Guide Tere are 6 total problems in tis exam. On eac problem, you must sow all your work, or oterwise torougly explain your conclusions. Tere

More information