7 Semiparametric Methods and Partially Linear Regression

7.1 Overview

A model is called semiparametric if it is described by $(\beta, \tau)$ where $\beta$ is finite-dimensional (e.g. parametric) and $\tau$ is infinite-dimensional (nonparametric). All moment condition models are semiparametric in the sense that the distribution of the data is unspecified and infinite dimensional. But the settings more typically called semiparametric are those where there is explicit estimation of $\tau$. In many contexts the nonparametric part is a conditional mean, variance, density or distribution function. Often $\beta$ is the parameter of interest and $\tau$ is a nuisance parameter, but this is not necessarily the case. In many semiparametric contexts, $\tau$ is estimated first, and then $\hat\beta$ is a two-step estimator. But in other contexts $(\beta, \tau)$ are jointly estimated.

7.2 Feasible Nonparametric GLS

A classic semiparametric model, which is not in Li-Racine, is feasible GLS with unknown variance function. The seminal papers are Carroll (1982, Annals of Statistics) and Robinson (1987, Econometrica). The setting is a linear regression
$$y_i = X_i'\beta + e_i$$
$$E(e_i \mid X_i) = 0$$
$$E(e_i^2 \mid X_i) = \sigma^2(X_i)$$
where the variance function $\sigma^2(x)$ is unknown but smooth in $x$, and $x \in R^q$. (The idea also applies to non-linear but parametric regression functions.) In this model, the nonparametric nuisance parameter is $\tau = \sigma^2(\cdot)$.

As the model is a regression, the efficiency bound for $\beta$ is attained by the GLS regression
$$\tilde\beta = \left( \sum_{i=1}^n \sigma^{-2}(X_i) X_i X_i' \right)^{-1} \sum_{i=1}^n \sigma^{-2}(X_i) X_i y_i.$$
This of course is infeasible. Carroll and Robinson suggested replacing $\sigma^2(X_i)$ with $\hat\sigma^2(X_i)$, where $\hat\sigma^2(x)$ is a nonparametric estimator. (Carroll used kernel methods; Robinson used nearest neighbor methods.) Specifically, letting $\hat\sigma^2(x)$ be the NW estimator of $\sigma^2(x)$, we can define the feasible estimator
$$\hat\beta = \left( \sum_{i=1}^n \hat\sigma^{-2}(X_i) X_i X_i' \right)^{-1} \sum_{i=1}^n \hat\sigma^{-2}(X_i) X_i y_i.$$
This seems sensible. The question is to find its asymptotic distribution, and in particular to find whether it is asymptotically equivalent to $\tilde\beta$.
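To make the two-step construction concrete, here is a minimal simulation sketch in Python/NumPy. Everything in it (the data-generating process, the Gaussian kernel, the bandwidth, the variance floor, and the helper name `nw`) is a hypothetical illustration, not from the notes: estimate $\beta$ by OLS, smooth the squared residuals with an NW regression to get $\hat\sigma^2(x)$, then run weighted least squares.

```python
import numpy as np

def nw(x_eval, x, y, h):
    """Nadaraya-Watson regression of y on x, evaluated at x_eval (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1, 1, n)
sigma2 = 0.1 + x**2                      # true skedastic function sigma^2(x) (hypothetical)
e = rng.normal(0, np.sqrt(sigma2))
X = np.column_stack([np.ones(n), x])     # regressors: intercept and x
y = X @ np.array([1.0, 2.0]) + e         # true beta = (1, 2)

# Step 1: OLS residuals
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
ehat = y - X @ b_ols

# Step 2: NW estimate of sigma^2(x) from the squared residuals
s2hat = nw(x, x, ehat**2, h=0.1)
s2hat = np.maximum(s2hat, 1e-3)          # floor to keep weights bounded

# Step 3: feasible GLS = weighted least squares with weights 1/s2hat
W = 1.0 / s2hat
b_fgls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
```

With this design the variance $\sigma^2(x) = 0.1 + x^2$ varies strongly with $x$, so the feasible GLS estimate is typically more precise than OLS.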

7.3 Generated Regressors

The model is
$$y_i = g(X_i)'\beta + e_i$$
$$E(e_i \mid X_i) = 0$$
where $\beta$ is finite dimensional but $g$ is an unknown function. Suppose $g$ is identified by another equation, so that we have a consistent estimate $\hat g(x)$ of $g(x)$. (Imagine a nonparametric Heckman estimator.) Then we could estimate $\beta$ by least-squares of $y_i$ on $\hat g(X_i)$. This problem is called generated regressors, as the regressor is a (consistent) estimate of an infeasible regressor. In general, $\hat\beta$ is consistent. But what is its distribution?

7.4 Andrews' MINPIN Theory

A useful framework to study the type of problem from the previous section is given in Andrews (Econometrica, 1994). It is reviewed in section 7.3 of Li-Racine, but the discussion is incomplete and there is at least one important omission. If you really want to learn this theory I suggest reading Andrews' paper.

The setting is when the estimator $\hat\beta$ MINimizes a criterion function which depends on a Preliminary INfinite dimensional Nuisance parameter estimator, hence MINPIN. Let $\beta \in B$ be the parameter of interest, and let $\tau \in T$ denote the infinite-dimensional nuisance parameter. Let $\beta_0$ and $\tau_0$ denote the true values. Let $\hat\tau$ be a first-step estimate of $\tau$, and assume that it is consistent: $\hat\tau \to_p \tau_0$.

Now suppose that the criterion function for estimation of $\beta$ depends on the first-step estimate $\hat\tau$. Let the criterion be $Q_n(\beta, \tau)$ and suppose that
$$\hat\beta = \operatorname{argmin}_\beta \, Q_n(\beta, \hat\tau).$$
Thus $\hat\beta$ is a two-step estimator. Assume that
$$\frac{\partial}{\partial \beta} Q_n(\beta, \tau) = \bar m_n(\beta, \tau) + o_p\!\left(\frac{1}{\sqrt{n}}\right), \qquad \bar m_n(\beta, \tau) = \frac{1}{n} \sum_{i=1}^n m_i(\beta, \tau)$$

where $m_i(\beta, \tau)$ is a function of the $i$th observation. In just-identified models there is no $o_p(n^{-1/2})$ error (and we now ignore the presence of this error).

In the FGLS example, $\tau(\cdot) = \sigma^2(\cdot)$, $\hat\tau(\cdot) = \hat\sigma^2(\cdot)$, and
$$m_i(\beta, \tau) = \tau(X_i)^{-1} X_i \left( y_i - X_i'\beta \right).$$
In the generated regressor problem,
$$m_i(\beta, \tau) = \tau(X_i) \left( y_i - \tau(X_i)'\beta \right).$$
In general, the first-order condition (FOC) for $\hat\beta$ is
$$0 = \bar m_n(\hat\beta, \hat\tau).$$
Assume $\hat\beta \to_p \beta_0$. Implicit to obtain consistency is the requirement that the population expectation of the FOC is zero, namely
$$E\, m_i(\beta_0, \tau_0) = 0,$$
and we assume that this is the case. We expand the FOC in the first argument:
$$0 = \sqrt{n}\, \bar m_n(\hat\beta, \hat\tau) = \sqrt{n}\, \bar m_n(\beta_0, \hat\tau) + M_n(\beta_0, \hat\tau)\, \sqrt{n} (\hat\beta - \beta_0) + o_p(1)$$
where
$$M_n(\beta, \tau) = \frac{\partial}{\partial \beta'} \bar m_n(\beta, \tau).$$
It follows that
$$\sqrt{n} (\hat\beta - \beta_0) \simeq -M_n(\beta_0, \hat\tau)^{-1} \sqrt{n}\, \bar m_n(\beta_0, \hat\tau).$$
If $M_n(\beta, \tau)$ converges to its expectation $M(\beta, \tau) = E \frac{\partial}{\partial \beta'} m_i(\beta, \tau)$ uniformly in its arguments, then $M_n(\beta_0, \hat\tau) \to_p M(\beta_0, \tau_0) = M$, say. Then
$$\sqrt{n} (\hat\beta - \beta_0) \simeq -M^{-1} \sqrt{n}\, \bar m_n(\beta_0, \hat\tau).$$
We cannot take a Taylor expansion in $\tau$ because it is infinite dimensional. Instead, Andrews uses a stochastic equicontinuity argument. Define the population version of $\bar m_n(\beta, \tau)$:
$$\bar m(\beta, \tau) = E\, m_i(\beta, \tau).$$

Note that at the true values $\bar m(\beta_0, \tau_0) = 0$ (as discussed above), but the function is non-zero for generic values. Define the empirical process
$$\nu_n(\tau) = \sqrt{n} \left( \bar m_n(\beta_0, \tau) - \bar m(\beta_0, \tau) \right) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \left( m_i(\beta_0, \tau) - E\, m_i(\beta_0, \tau) \right).$$
Notice that $\nu_n(\tau)$ is a normalized sum of mean-zero random variables. Thus for any fixed $\tau$, $\nu_n(\tau)$ converges to a normal random vector. Viewed as a function of $\tau$, we might expect $\nu_n(\tau)$ to vary smoothly in the argument $\tau$. The stochastic formulation of this is called stochastic equicontinuity. Roughly, as $n \to \infty$, $\nu_n(\tau)$ remains well-behaved as a function of $\tau$. Andrews' first key assumption is that $\nu_n(\tau)$ is stochastically equicontinuous. The important implication of stochastic equicontinuity is that $\hat\tau \to_p \tau_0$ implies
$$\nu_n(\hat\tau) - \nu_n(\tau_0) \to_p 0.$$
Intuitively, if $g(\cdot)$ is continuous, then $g(\hat\tau) - g(\tau_0) \to_p 0$. More generally, if $g_n(\cdot)$ converges in probability uniformly to a continuous function $g(\cdot)$, then $g_n(\hat\tau) - g(\tau_0) \to_p 0$. The case of stochastic equicontinuity is the most general, but it still has the same implication.

Since $E\, m_i(\beta_0, \tau_0) = 0$, when we evaluate the empirical process at the true values we have a zero-mean normalized sum, which is asymptotically normal by the CLT:
$$\nu_n(\tau_0) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \left( m_i(\beta_0, \tau_0) - E\, m_i(\beta_0, \tau_0) \right) = \frac{1}{\sqrt{n}} \sum_{i=1}^n m_i(\beta_0, \tau_0) \to_d N(0, \Omega)$$
where
$$\Omega = E\, m_i(\beta_0, \tau_0)\, m_i(\beta_0, \tau_0)'.$$
It follows that
$$\nu_n(\hat\tau) = \nu_n(\tau_0) + o_p(1) \to_d N(0, \Omega)$$
and thus
$$\sqrt{n}\, \bar m_n(\beta_0, \hat\tau) = \sqrt{n} \left( \bar m_n(\beta_0, \hat\tau) - \bar m(\beta_0, \hat\tau) \right) + \sqrt{n}\, \bar m(\beta_0, \hat\tau) = \nu_n(\hat\tau) + \sqrt{n}\, \bar m(\beta_0, \hat\tau).$$

The final detail is what to do with $\sqrt{n}\, \bar m(\beta_0, \hat\tau)$. Andrews directly assumes that
$$\sqrt{n}\, \bar m(\beta_0, \hat\tau) \to_p 0.$$
This is the second key assumption, and we discuss it below. Under this assumption,
$$\sqrt{n}\, \bar m_n(\beta_0, \hat\tau) \to_d N(0, \Omega)$$
and combining this with our earlier expansion, we obtain:

Andrews' MINPIN Theorem. Under Assumptions 1-6 below,
$$\sqrt{n} (\hat\beta - \beta_0) \to_d N(0, V)$$
where
$$V = M^{-1} \Omega M^{-1\prime}, \qquad M = E \frac{\partial}{\partial \beta'} m_i(\beta_0, \tau_0), \qquad \Omega = E\, m_i(\beta_0, \tau_0)\, m_i(\beta_0, \tau_0)'$$
for $\bar m(\beta, \tau) = E\, m_i(\beta, \tau)$.

Assumption. As $n \to \infty$,
1. $\hat\beta \to_p \beta_0$
2. $\hat\tau \to_p \tau_0$
3. $\sqrt{n}\, \bar m(\beta_0, \hat\tau) \to_p 0$
4. $\bar m(\beta_0, \tau_0) = 0$
5. $\nu_n(\tau)$ is stochastically equicontinuous at $\tau_0$
6. $\bar m_n(\beta, \tau)$ and $\frac{\partial}{\partial \beta'} \bar m_n(\beta, \tau)$ satisfy uniform WLLNs over $B \times T$. (They converge in probability to their expectations, uniformly over the parameter space.)

Discussion of result: The theorem says that the semiparametric estimator $\hat\beta$ has the same asymptotic distribution as the idealized estimator obtained by replacing the nonparametric estimate $\hat\tau$ with the true function $\tau_0$. Thus the estimator is adaptive. This might seem too good to be true. The key is Assumption 3, which holds in some cases, but not in others.

Discussion of assumptions: Assumptions 1 and 2 state that the estimators are consistent, which should be separately verified. Assumption 4 states that the FOC identifies $\beta$ when evaluated at the true $\tau_0$. Assumptions 5 and 6 are regularity conditions, essentially smoothness of the underlying functions, plus sufficient moments.

7.5 Orthogonality Assumption

The key Assumption 3 for Andrews' MINPIN theory was somehow missed in the write-up in Li-Racine. Assumption 3 is not always true, and it is not just a regularity condition. It requires a sort of orthogonality between the estimation of $\beta$ and $\tau$.

Suppose that $\tau$ is finite-dimensional. Then by a Taylor expansion,
$$\sqrt{n}\, \bar m(\beta_0, \hat\tau) \simeq \sqrt{n}\, \bar m(\beta_0, \tau_0) + \frac{\partial}{\partial \tau'} \bar m(\beta_0, \tau_0)\, \sqrt{n} (\hat\tau - \tau_0) = \frac{\partial}{\partial \tau'} \bar m(\beta_0, \tau_0)\, \sqrt{n} (\hat\tau - \tau_0)$$
since $\bar m(\beta_0, \tau_0) = 0$ by Assumption 4. Since $\tau$ is parametric, we should expect $\sqrt{n} (\hat\tau - \tau_0)$ to converge to a normal vector. Thus this expression will converge in probability to zero if
$$\frac{\partial}{\partial \tau'} \bar m(\beta_0, \tau_0) = 0.$$
Recall that $\bar m(\beta, \tau)$ is the expectation of the FOC, which is the derivative of the criterion in $\beta$. Thus $\frac{\partial}{\partial \tau'} \bar m(\beta, \tau)$ is the cross-derivative of the criterion, and the above statement is that this cross-derivative is zero, which is an orthogonality condition (e.g. block diagonality of the Hessian).

Now when $\tau$ is infinite-dimensional the above argument does not work, but it lends intuition. An analog of the derivative condition, which is sufficient for Assumption 3, is that
$$\bar m(\beta_0, \tau) = 0$$
for all $\tau$ in a neighborhood of $\tau_0$.

In the FGLS example,
$$\bar m(\beta_0, \tau) = E \left( \tau(X_i)^{-1} X_i e_i \right) = 0$$
for all $\tau$, so Assumption 3 is satisfied in this example. We have the implication:

Theorem. Under regularity conditions, the feasible nonparametric GLS estimator of the previous section is asymptotically equivalent to infeasible GLS.

In the generated regressor example,
$$\bar m(\beta_0, \tau) = E \left( \tau(X_i) \left( y_i - \tau(X_i)'\beta_0 \right) \right) = E \left( \tau(X_i) \left( e_i + (g(X_i) - \tau(X_i))'\beta_0 \right) \right) = E \left( \tau(X_i) (g(X_i) - \tau(X_i))'\beta_0 \right) = \int \tau(x) (g(x) - \tau(x))'\beta_0\, f(x)\, dx.$$
Assumption 3 requires $\sqrt{n}\, \bar m(\beta_0, \hat\tau) \to_p 0$. But setting $\tau = \hat g$, the term $\bar m(\beta_0, \hat g)$ involves $g(x) - \hat g(x)$, which converges to zero only at a nonparametric rate slower than $n^{-1/2}$, so $\sqrt{n}\, \bar m(\beta_0, \hat g)$ certainly does not converge to zero. Assumption 3 is generically violated when there are generated regressors.
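A small simulation sketch of the generated-regressors setup from Section 7.3 (the DGP, the auxiliary equation identifying $g$, the bandwidth, and the helper `nw` are all hypothetical): the second-step LS estimator using $\hat g(x)$ in place of $g(x)$ remains consistent, as claimed earlier, even though Assumption 3 fails and the MINPIN distribution theory does not apply.

```python
import numpy as np

def nw(x_eval, x, y, h):
    """Nadaraya-Watson regression of y on x, evaluated at x_eval (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(1)
n = 5000
beta = 1.5
x = rng.uniform(-2, 2, n)
g = np.sin(x)                              # unknown function g(x) (hypothetical)
y = beta * g + rng.normal(0, 0.5, n)       # main equation: y = g(x)'beta + e

# auxiliary equation identifying g: noisy direct observations of g(x)
w_aux = g + rng.normal(0, 0.5, n)
ghat = nw(x, x, w_aux, h=0.2)              # first-step nonparametric estimate of g

# second step: LS of y on the generated regressor ghat(x)
beta_hat = (ghat @ y) / (ghat @ ghat)
```

The point estimate is close to the true $\beta = 1.5$; what the MINPIN failure changes is the asymptotic distribution, so naive standard errors from the second step would be incorrect.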

There is one interesting exception. When $\beta_0 = 0$, then $\bar m(\beta_0, \tau) = 0$ and thus $\sqrt{n}\, \bar m(\beta_0, \hat\tau) = 0$, so Assumption 3 is satisfied.

We see that Andrews' MINPIN assumptions do not apply in all semiparametric models; they apply only to those which satisfy Assumption 3, which needs to be verified. The other key condition is stochastic equicontinuity, which is difficult to verify but is generally satisfied for well-behaved estimators. The remaining assumptions are smoothness and regularity conditions, and typically are not of concern in applications.

7.6 Partially Linear Regression Model

The semiparametric partially linear regression model is
$$y_i = X_i'\beta + g(Z_i) + e_i$$
$$E(e_i \mid X_i, Z_i) = 0$$
$$E(e_i^2 \mid X_i = x, Z_i = z) = \sigma^2(x, z).$$
That is, the regressors are $(X_i, Z_i)$, and the model specifies the conditional mean as linear in $X_i$ but possibly non-linear in $Z_i \in R^q$. This is a very useful compromise between fully nonparametric and fully parametric specifications. Often the binary (dummy) variables are put in $X_i$. Often there is just one nonlinear variable, $q = 1$, to keep things simple. The goal is to estimate $\beta$ and $g$, and to obtain confidence intervals.

The first issue to consider is identification. Since $g$ is unconstrained, the elements of $X_i$ cannot be collinear with any function of $Z_i$. This means that we must exclude from $X_i$ intercepts and any deterministic functions of $Z_i$. The function $g$ includes these components.

7.7 Robinson's Transformation

Robinson (Econometrica, 1988) is the seminal treatment of the partially linear model. His first contribution is to show that we can concentrate out the unknown $g$ by using a generalization of residual regression. Take the equation
$$y_i = X_i'\beta + g(Z_i) + e_i$$
and apply the conditional expectation operator $E(\cdot \mid Z_i)$. We obtain
$$E(y_i \mid Z_i) = E(X_i \mid Z_i)'\beta + E(g(Z_i) \mid Z_i) + E(e_i \mid Z_i) = E(X_i \mid Z_i)'\beta + g(Z_i)$$

(using the law of iterated expectations). Defining the conditional expectations
$$g_y(z) = E(y_i \mid Z_i = z)$$
$$g_x(z) = E(X_i \mid Z_i = z)$$
we can write this expression as
$$g_y(z) = g_x(z)'\beta + g(z).$$
Subtracting from the original equation, the function $g$ disappears:
$$y_i - g_y(Z_i) = (X_i - g_x(Z_i))'\beta + e_i.$$
We can write this as
$$e_{yi} = e_{xi}'\beta + e_i$$
$$y_i = g_y(Z_i) + e_{yi}$$
$$X_i = g_x(Z_i) + e_{xi}.$$
That is, $\beta$ is the coefficient of the regression of $e_{yi}$ on $e_{xi}$, where these are the conditional expectation errors from the regression of $y_i$ (and $X_i$) on $Z_i$ only. This is a conditional expectation generalization of the idea of residual regression. This transformed equation immediately suggests an infeasible estimator for $\beta$, by LS of $e_{yi}$ on $e_{xi}$:
$$\tilde\beta = \left( \sum_{i=1}^n e_{xi} e_{xi}' \right)^{-1} \sum_{i=1}^n e_{xi} e_{yi}.$$

7.8 Robinson's Estimator

Robinson suggested first estimating $g_y$ and $g_x$ by NW regression, using these to obtain residuals $\hat e_{yi}$ and $\hat e_{xi}$, and replacing these in the formula for $\tilde\beta$. Specifically, let $\hat g_y(z)$ and $\hat g_x(z)$ denote the NW estimates of the conditional means of $y_i$ and $X_i$ given $Z_i = z$. Assuming $q = 1$,
$$\hat g_y(z) = \frac{\sum_{i=1}^n k\!\left(\frac{Z_i - z}{h}\right) y_i}{\sum_{i=1}^n k\!\left(\frac{Z_i - z}{h}\right)}$$
$$\hat g_x(z) = \frac{\sum_{i=1}^n k\!\left(\frac{Z_i - z}{h}\right) X_i}{\sum_{i=1}^n k\!\left(\frac{Z_i - z}{h}\right)}.$$
The estimator $\hat g_x(z)$ is a vector of the same dimension as $X_i$.
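Robinson's transformation can be checked numerically before turning to the feasible version. In the following sketch (DGP entirely hypothetical), the conditional means are known by construction: $g_x(z) = E(X_i \mid Z_i = z) = z^2$, so the infeasible residual regression of $e_{yi}$ on $e_{xi}$ recovers $\beta$ while $g$ drops out.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
beta = 2.0
z = rng.uniform(0, 1, n)
x = z**2 + rng.normal(size=n)        # so g_x(z) = E(x | z) = z^2 by construction
g = np.cos(3 * z)                    # nonparametric component (hypothetical)
y = beta * x + g + rng.normal(size=n)

# true conditional means, known here because we built the DGP
gx = z**2
gy = beta * gx + g                   # g_y(z) = g_x(z)'beta + g(z)

# conditional-expectation errors and the infeasible residual regression
ex = x - gx
ey = y - gy
beta_tilde = (ex @ ey) / (ex @ ex)
```

Note that `beta_tilde` is computed without ever using $g$ directly, which is exactly the point of the transformation.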

Notice that you are regressing each variable (the $y_i$ and the elements of $X_i$) separately on the continuous variable $Z_i$. You should view each of these regressions as a separate NW regression, and you should probably use a different bandwidth for each, as the dependence on $Z_i$ will vary across variables. (For example, some regressors in $X_i$ might be independent of $Z_i$, in which case you would want to use an infinite bandwidth.) While Robinson discussed NW regression, this is not essential. You could substitute LL or WNW instead.

Given the regression functions, we obtain the regression residuals
$$\hat e_{yi} = y_i - \hat g_y(Z_i)$$
$$\hat e_{xi} = X_i - \hat g_x(Z_i).$$
Our first attempt at an estimator for $\beta$ is then
$$\hat\beta = \left( \sum_{i=1}^n \hat e_{xi} \hat e_{xi}' \right)^{-1} \sum_{i=1}^n \hat e_{xi} \hat e_{yi}.$$

7.9 Trimming

The asymptotic theory for semiparametric estimators typically requires that the first-step estimator converges uniformly at some rate. The difficulty is that $\hat g_y(z)$ and $\hat g_x(z)$ do not converge uniformly over unbounded sets. Equivalently, the problem is due to the estimated density of $Z_i$ in the denominator. Another way of viewing the problem is that these estimates are quite noisy in sparse regions of the sample space, so residuals in such regions are unstable, and observations there can unduly influence the estimate of $\beta$.

The nonparametric regression estimates depend inversely on the density estimate
$$\hat f(z) = \frac{1}{nh} \sum_{i=1}^n k\!\left(\frac{Z_i - z}{h}\right).$$
For values of $z$ where $f(z)$ is close to zero, $\hat f(z)$ is not bounded away from zero, so the NW estimates at this point can be poor. Consequently the residuals for observations $i$ such that $f(Z_i) \approx 0$ will be quite unreliable, and can have an undue influence on $\hat\beta$.

A standard solution is to introduce trimming. Let $b > 0$ be a trimming constant, and let $1_i(b) = 1\{\hat f(Z_i) \geq b\}$ denote an indicator variable for those observations for which the estimated density of $Z_i$ is above $b$.

The trimmed version of $\hat\beta$ is
$$\hat\beta = \left( \sum_{i=1}^n \hat e_{xi} \hat e_{xi}'\, 1_i(b) \right)^{-1} \sum_{i=1}^n \hat e_{xi} \hat e_{yi}\, 1_i(b).$$
This is a trimmed LS residual regression. The asymptotic theory requires that $b = b_n \to 0$, but unfortunately there is not good guidance about how to select $b$ in practice. Often trimming is ignored in applications. One practical suggestion is to estimate with and without trimming to assess robustness.

7.10 Asymptotic Distribution

Robinson (1988), Andrews (1994) and Li (1996) are references. The needed regularity conditions are that the data are iid, $Z_i$ has a density, and the regression functions, density, and conditional variance function are sufficiently smooth with respect to their arguments. Assuming a second-order kernel, and for simplicity writing $h = h_1 = \cdots = h_q$, the important condition on the bandwidth sequence is
$$\sqrt{n} \left( h^4 + \frac{1}{n h^q} \right) \to 0.$$
Technically this is not quite enough, as it ignores the interaction with the trimming parameter $b$. But since $b_n \to 0$ can be set at an extremely slow rate, it can be safely ignored.

The above condition is similar to the standard convergence rates for nonparametric estimation, multiplied by $\sqrt{n}$. Equivalently, what is essential is that the uniform MSE of the nonparametric estimators converges faster than $n^{-1/2}$, or that the estimators themselves converge faster than $n^{-1/4}$. That is, what we need is
$$n^{1/4} \sup_z |\hat g_y(z) - g_y(z)| \to_p 0$$
$$n^{1/4} \sup_z \|\hat g_x(z) - g_x(z)\| \to_p 0.$$
From the theory for nonparametric regression, these rates hold when bandwidths are picked optimally and $q \leq 3$. In practice, $q \leq 3$ is probably sufficient. If $q > 3$ is desired, then higher-order kernels can be used to improve the rate of convergence. So long as the rate is faster than $n^{-1/4}$, the following result applies.

Theorem (Robinson). Under regularity conditions, including $q \leq 3$, the trimmed estimator satisfies
$$\sqrt{n} (\hat\beta - \beta) \to_d N(0, V)$$
$$V = \left( E\, e_{xi} e_{xi}' \right)^{-1} E \left( e_{xi} e_{xi}'\, \sigma^2(X_i, Z_i) \right) \left( E\, e_{xi} e_{xi}' \right)^{-1}.$$
That is, $\hat\beta$ is asymptotically equivalent to the infeasible estimator $\tilde\beta$. The variance matrix may be estimated using conventional LS methods.
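The full procedure of Sections 7.8-7.9 can be sketched as follows. This is a minimal illustration, not the notes' own code: the Gaussian kernel, the common bandwidth, the DGP, and the trimming constant are all hypothetical choices, and in practice separate bandwidths per first-stage regression are advisable, as discussed above.

```python
import numpy as np

def nw(z, v, h):
    """NW regression of v on scalar z, fitted at the sample points (Gaussian kernel)."""
    w = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2)
    return (w @ v) / w.sum(axis=1)

rng = np.random.default_rng(2)
n = 3000
beta = np.array([1.0, -0.5])
z = rng.uniform(0, 1, n)
X = np.column_stack([rng.normal(size=n) + z,      # continuous regressor related to z
                     rng.binomial(1, 0.5, n)])    # dummy regressor, independent of z
y = X @ beta + np.sin(2 * np.pi * z) + rng.normal(0, 0.5, n)

# first step: NW regressions of y and each column of X on z, then residuals
h = 0.08
ey = y - nw(z, y, h)
ex = X - np.column_stack([nw(z, X[:, j], h) for j in range(X.shape[1])])

# trimming indicator: drop observations where the density estimate of z is small
fhat = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
keep = fhat >= 0.1                                # trimming constant b = 0.1 (hypothetical)

# second step: trimmed LS residual regression (Robinson's estimator)
beta_hat = np.linalg.solve(ex[keep].T @ ex[keep], ex[keep].T @ ey[keep])
```

With $z$ uniform on $[0, 1]$ the density is bounded away from zero, so trimming drops few if any observations here; it matters when $Z_i$ has thin tails.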

7.11 Verification of Andrews' MINPIN Condition

This theorem states that Robinson's two-step estimator for $\beta$ is asymptotically equivalent to the infeasible one-step estimator. This is an example of the application of Andrews' MINPIN theory. Andrews specifically mentions that the $n^{1/4}$ convergence rates for $\hat g_y(z)$ and $\hat g_x(z)$ are essential to obtain this result. To see this, note that the estimator $\hat\beta$ solves the FOC
$$0 = \frac{1}{n} \sum_{i=1}^n \left( X_i - \hat g_x(Z_i) \right) \left( y_i - \hat g_y(Z_i) - (X_i - \hat g_x(Z_i))'\hat\beta \right).$$
In Andrews' MINPIN notation, let $\tau_x$ and $\tau_y$ denote fixed (function) values of the regression estimates $\hat g_x$ and $\hat g_y$. Then
$$m_i(\beta, \tau) = \left( X_i - \tau_x(Z_i) \right) \left( y_i - \tau_y(Z_i) - (X_i - \tau_x(Z_i))'\beta \right).$$
Since
$$y_i = g_y(Z_i) + (X_i - g_x(Z_i))'\beta + e_i$$
then
$$E \left( y_i - \tau_y(Z_i) - (X_i - \tau_x(Z_i))'\beta \mid X_i, Z_i \right) = g_y(Z_i) - \tau_y(Z_i) - (g_x(Z_i) - \tau_x(Z_i))'\beta$$
and
$$\bar m(\beta_0, \tau) = E\, m_i(\beta_0, \tau) = E\, E \left( m_i(\beta_0, \tau) \mid X_i, Z_i \right) = E \left[ \left( X_i - \tau_x(Z_i) \right) \left( g_y(Z_i) - \tau_y(Z_i) - (g_x(Z_i) - \tau_x(Z_i))'\beta_0 \right) \right]$$
$$= E \left[ \left( g_x(Z_i) - \tau_x(Z_i) \right) \left( g_y(Z_i) - \tau_y(Z_i) - (g_x(Z_i) - \tau_x(Z_i))'\beta_0 \right) \right] = \int \left( g_x(z) - \tau_x(z) \right) \left( g_y(z) - \tau_y(z) - (g_x(z) - \tau_x(z))'\beta_0 \right) f(z)\, dz$$
where the second-to-last equality uses the conditional expectation given $Z_i$. Then replacing $\tau_x$ with $\hat g_x$ and $\tau_y$ with $\hat g_y$,
$$\sqrt{n}\, \bar m(\beta_0, \hat\tau) = \sqrt{n} \int \left( g_x(z) - \hat g_x(z) \right) \left( g_y(z) - \hat g_y(z) - (g_x(z) - \hat g_x(z))'\beta_0 \right) f(z)\, dz.$$
Taking bounds,
$$\left\| \sqrt{n}\, \bar m(\beta_0, \hat\tau) \right\| \leq \sqrt{n} \left( \sup_z \|\hat g_x(z) - g_x(z)\| \sup_z |\hat g_y(z) - g_y(z)| + \sup_z \|\hat g_x(z) - g_x(z)\|^2 \|\beta_0\| \right) \to_p 0$$
when the nonparametric regression estimates converge faster than $n^{-1/4}$.

Indeed, we see that the $o_p(n^{-1/4})$ convergence rates imply the key condition $\sqrt{n}\, \bar m(\beta_0, \hat\tau) \to_p 0$.

7.12 Estimation of the Nonparametric Component

Recall that the model is
$$y_i = X_i'\beta + g(Z_i) + e_i$$
and the goal is to estimate $\beta$ and $g$. We have described Robinson's estimator for $\beta$. We now discuss estimation of $g$. Since $\hat\beta$ converges at rate $n^{-1/2}$, which is faster than a nonparametric rate, we can simply pretend that $\beta$ is known, and do nonparametric regression of $y_i - X_i'\hat\beta$ on $Z_i$:
$$\hat g(z) = \frac{\sum_{i=1}^n k\!\left(\frac{Z_i - z}{h}\right) \left( y_i - X_i'\hat\beta \right)}{\sum_{i=1}^n k\!\left(\frac{Z_i - z}{h}\right)}.$$
The bandwidth $h = (h_1, \ldots, h_q)$ is distinct from those for the first-stage regressions. It is not hard to see that this estimator is asymptotically equivalent to the infeasible regression in which $\hat\beta$ is replaced with the true $\beta$. Standard errors for $\hat g(z)$ may be computed as for standard nonparametric regression.

7.13 Bandwidth Selection

In a semiparametric context, it is important to study the effect a bandwidth has on the performance of the estimator of interest before determining the bandwidth. In many cases, this requires a nonconventional bandwidth rate. However, this problem does not occur in the partially linear model. The first-step bandwidths used for $\hat g_y(z)$ and $\hat g_x(z)$ are inputs for the calculation of $\hat\beta$, and the goal is presumably accurate estimation of $\beta$. The bandwidth impacts the theory for $\hat\beta$ through the uniform convergence rates for $\hat g_y(z)$ and $\hat g_x(z)$, suggesting that we use conventional bandwidth rules, e.g. cross-validation.
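The second-stage estimate of $g$ together with a cross-validated bandwidth can be sketched as follows. The candidate bandwidth grid, the DGP, and the helper `nw_loo_cv` are hypothetical; $\hat\beta$ is taken as given from the Robinson step (here set to the true value for simplicity).

```python
import numpy as np

def nw_loo_cv(z, v, hs):
    """Leave-one-out CV for NW regression of v on z over candidate bandwidths hs."""
    d2 = (z[:, None] - z[None, :]) ** 2
    best_h, best_cv = None, np.inf
    for h in hs:
        w = np.exp(-0.5 * d2 / h**2)
        np.fill_diagonal(w, 0.0)              # leave own observation out
        vhat = (w @ v) / w.sum(axis=1)
        cv = np.mean((v - vhat) ** 2)
        if cv < best_cv:
            best_h, best_cv = h, cv
    return best_h

rng = np.random.default_rng(4)
n = 1000
z = rng.uniform(0, 1, n)
X = rng.normal(size=(n, 1))
beta_hat = np.array([1.0])                    # assume beta already estimated (Robinson step)
y = X @ beta_hat + np.sin(2 * np.pi * z) + rng.normal(0, 0.3, n)

# remove the (estimated) linear part, then smooth the remainder on z
v = y - X @ beta_hat
h = nw_loo_cv(z, v, hs=[0.02, 0.05, 0.1, 0.2])
w = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2)
ghat = (w @ v) / w.sum(axis=1)                # second-stage estimate of g at the sample points
```

Because $\hat\beta$ converges faster than any nonparametric rate, plugging it in barely perturbs the residualized series, which is why conventional cross-validation is appropriate here.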


More information

REVIEW LAB ANSWER KEY

REVIEW LAB ANSWER KEY REVIEW LAB ANSWER KEY. Witout using SN, find te derivative of eac of te following (you do not need to simplify your answers): a. f x 3x 3 5x x 6 f x 3 3x 5 x 0 b. g x 4 x x x notice te trick ere! x x g

More information

1. Questions (a) through (e) refer to the graph of the function f given below. (A) 0 (B) 1 (C) 2 (D) 4 (E) does not exist

1. Questions (a) through (e) refer to the graph of the function f given below. (A) 0 (B) 1 (C) 2 (D) 4 (E) does not exist Mat 1120 Calculus Test 2. October 18, 2001 Your name Te multiple coice problems count 4 points eac. In te multiple coice section, circle te correct coice (or coices). You must sow your work on te oter

More information

Taylor Series and the Mean Value Theorem of Derivatives

Taylor Series and the Mean Value Theorem of Derivatives 1 - Taylor Series and te Mean Value Teorem o Derivatives Te numerical solution o engineering and scientiic problems described by matematical models oten requires solving dierential equations. Dierential

More information

Physically Based Modeling: Principles and Practice Implicit Methods for Differential Equations

Physically Based Modeling: Principles and Practice Implicit Methods for Differential Equations Pysically Based Modeling: Principles and Practice Implicit Metods for Differential Equations David Baraff Robotics Institute Carnegie Mellon University Please note: Tis document is 997 by David Baraff

More information

An L p di erentiable non-di erentiable function

An L p di erentiable non-di erentiable function An L di erentiable non-di erentiable function J. Marsall As Abstract. Tere is a a set E of ositive Lebesgue measure and a function nowere di erentiable on E wic is di erentible in te L sense for every

More information

Simple Estimators for Semiparametric Multinomial Choice Models

Simple Estimators for Semiparametric Multinomial Choice Models Simple Estimators for Semiparametric Multinomial Choice Models James L. Powell and Paul A. Ruud University of California, Berkeley March 2008 Preliminary and Incomplete Comments Welcome Abstract This paper

More information

Solving Continuous Linear Least-Squares Problems by Iterated Projection

Solving Continuous Linear Least-Squares Problems by Iterated Projection Solving Continuous Linear Least-Squares Problems by Iterated Projection by Ral Juengling Department o Computer Science, Portland State University PO Box 75 Portland, OR 977 USA Email: juenglin@cs.pdx.edu

More information

3.1 Extreme Values of a Function

3.1 Extreme Values of a Function .1 Etreme Values of a Function Section.1 Notes Page 1 One application of te derivative is finding minimum and maimum values off a grap. In precalculus we were only able to do tis wit quadratics by find

More information

Symmetry Labeling of Molecular Energies

Symmetry Labeling of Molecular Energies Capter 7. Symmetry Labeling of Molecular Energies Notes: Most of te material presented in tis capter is taken from Bunker and Jensen 1998, Cap. 6, and Bunker and Jensen 2005, Cap. 7. 7.1 Hamiltonian Symmetry

More information

Poisson Equation in Sobolev Spaces

Poisson Equation in Sobolev Spaces Poisson Equation in Sobolev Spaces OcMountain Dayligt Time. 6, 011 Today we discuss te Poisson equation in Sobolev spaces. It s existence, uniqueness, and regularity. Weak Solution. u = f in, u = g on

More information

IEOR 165 Lecture 10 Distribution Estimation

IEOR 165 Lecture 10 Distribution Estimation IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat

More information

232 Calculus and Structures

232 Calculus and Structures 3 Calculus and Structures CHAPTER 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS FOR EVALUATING BEAMS Calculus and Structures 33 Copyrigt Capter 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS 17.1 THE

More information

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4. December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need

More information

1 + t5 dt with respect to x. du = 2. dg du = f(u). du dx. dg dx = dg. du du. dg du. dx = 4x3. - page 1 -

1 + t5 dt with respect to x. du = 2. dg du = f(u). du dx. dg dx = dg. du du. dg du. dx = 4x3. - page 1 - Eercise. Find te derivative of g( 3 + t5 dt wit respect to. Solution: Te integrand is f(t + t 5. By FTC, f( + 5. Eercise. Find te derivative of e t2 dt wit respect to. Solution: Te integrand is f(t e t2.

More information

Math 212-Lecture 9. For a single-variable function z = f(x), the derivative is f (x) = lim h 0

Math 212-Lecture 9. For a single-variable function z = f(x), the derivative is f (x) = lim h 0 3.4: Partial Derivatives Definition Mat 22-Lecture 9 For a single-variable function z = f(x), te derivative is f (x) = lim 0 f(x+) f(x). For a function z = f(x, y) of two variables, to define te derivatives,

More information

5.1 introduction problem : Given a function f(x), find a polynomial approximation p n (x).

5.1 introduction problem : Given a function f(x), find a polynomial approximation p n (x). capter 5 : polynomial approximation and interpolation 5 introduction problem : Given a function f(x), find a polynomial approximation p n (x) Z b Z application : f(x)dx b p n(x)dx, a a one solution : Te

More information

MVT and Rolle s Theorem

MVT and Rolle s Theorem AP Calculus CHAPTER 4 WORKSHEET APPLICATIONS OF DIFFERENTIATION MVT and Rolle s Teorem Name Seat # Date UNLESS INDICATED, DO NOT USE YOUR CALCULATOR FOR ANY OF THESE QUESTIONS In problems 1 and, state

More information

Applications of the van Trees inequality to non-parametric estimation.

Applications of the van Trees inequality to non-parametric estimation. Brno-06, Lecture 2, 16.05.06 D/Stat/Brno-06/2.tex www.mast.queensu.ca/ blevit/ Applications of te van Trees inequality to non-parametric estimation. Regular non-parametric problems. As an example of suc

More information

Math 31A Discussion Notes Week 4 October 20 and October 22, 2015

Math 31A Discussion Notes Week 4 October 20 and October 22, 2015 Mat 3A Discussion Notes Week 4 October 20 and October 22, 205 To prepare for te first midterm, we ll spend tis week working eamples resembling te various problems you ve seen so far tis term. In tese notes

More information

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES

SECTION 1.10: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES (Section.0: Difference Quotients).0. SECTION.0: DIFFERENCE QUOTIENTS LEARNING OBJECTIVES Define average rate of cange (and average velocity) algebraically and grapically. Be able to identify, construct,

More information

Polynomials 3: Powers of x 0 + h

Polynomials 3: Powers of x 0 + h near small binomial Capter 17 Polynomials 3: Powers of + Wile it is easy to compute wit powers of a counting-numerator, it is a lot more difficult to compute wit powers of a decimal-numerator. EXAMPLE

More information

A = h w (1) Error Analysis Physics 141

A = h w (1) Error Analysis Physics 141 Introduction In all brances of pysical science and engineering one deals constantly wit numbers wic results more or less directly from experimental observations. Experimental observations always ave inaccuracies.

More information

Bandwidth Selection in Nonparametric Kernel Testing

Bandwidth Selection in Nonparametric Kernel Testing Te University of Adelaide Scool of Economics Researc Paper No. 2009-0 January 2009 Bandwidt Selection in Nonparametric ernel Testing Jiti Gao and Irene Gijbels Bandwidt Selection in Nonparametric ernel

More information

Smoothness Adaptive Average Derivative Estimation ±

Smoothness Adaptive Average Derivative Estimation ± Smootness Adaptive Average Derivative Estimation ± Marcia M.A. Scafgans Victoria Zinde-Walsyz Te Suntory Centre Suntory and Toyota International Centres for Economics and Related Disciplines London Scool

More information

Simple and Powerful GMM Over-identi cation Tests with Accurate Size

Simple and Powerful GMM Over-identi cation Tests with Accurate Size Simple and Powerful GMM Over-identi cation ests wit Accurate Size Yixiao Sun and Min Seong Kim Department of Economics, University of California, San Diego is version: August, 2 Abstract e paper provides

More information

Uniform Convergence Rates for Nonparametric Estimation

Uniform Convergence Rates for Nonparametric Estimation Uniform Convergence Rates for Nonparametric Estimation Bruce E. Hansen University of Wisconsin www.ssc.wisc.edu/~bansen October 2004 Preliminary and Incomplete Abstract Tis paper presents a set of rate

More information

Lab 6 Derivatives and Mutant Bacteria

Lab 6 Derivatives and Mutant Bacteria Lab 6 Derivatives and Mutant Bacteria Date: September 27, 20 Assignment Due Date: October 4, 20 Goal: In tis lab you will furter explore te concept of a derivative using R. You will use your knowledge

More information

THE IMPLICIT FUNCTION THEOREM

THE IMPLICIT FUNCTION THEOREM THE IMPLICIT FUNCTION THEOREM ALEXANDRU ALEMAN 1. Motivation and statement We want to understand a general situation wic occurs in almost any area wic uses matematics. Suppose we are given number of equations

More information

2.8 The Derivative as a Function

2.8 The Derivative as a Function .8 Te Derivative as a Function Typically, we can find te derivative of a function f at many points of its domain: Definition. Suppose tat f is a function wic is differentiable at every point of an open

More information

HOMEWORK HELP 2 FOR MATH 151

HOMEWORK HELP 2 FOR MATH 151 HOMEWORK HELP 2 FOR MATH 151 Here we go; te second round of omework elp. If tere are oters you would like to see, let me know! 2.4, 43 and 44 At wat points are te functions f(x) and g(x) = xf(x)continuous,

More information

, meant to remind us of the definition of f (x) as the limit of difference quotients: = lim

, meant to remind us of the definition of f (x) as the limit of difference quotients: = lim Mat 132 Differentiation Formulas Stewart 2.3 So far, we ave seen ow various real-world problems rate of cange and geometric problems tangent lines lead to derivatives. In tis section, we will see ow to

More information

Function Composition and Chain Rules

Function Composition and Chain Rules Function Composition an Cain Rules James K. Peterson Department of Biological Sciences an Department of Matematical Sciences Clemson University November 2, 2018 Outline Function Composition an Continuity

More information

A Simple Matching Method for Estimating Sample Selection Models Using Experimental Data

A Simple Matching Method for Estimating Sample Selection Models Using Experimental Data ANNALS OF ECONOMICS AND FINANCE 6, 155 167 (2005) A Simple Matcing Metod for Estimating Sample Selection Models Using Experimental Data Songnian Cen Te Hong Kong University of Science and Tecnology and

More information

University Mathematics 2

University Mathematics 2 University Matematics 2 1 Differentiability In tis section, we discuss te differentiability of functions. Definition 1.1 Differentiable function). Let f) be a function. We say tat f is differentiable at

More information

Recall from our discussion of continuity in lecture a function is continuous at a point x = a if and only if

Recall from our discussion of continuity in lecture a function is continuous at a point x = a if and only if Computational Aspects of its. Keeping te simple simple. Recall by elementary functions we mean :Polynomials (including linear and quadratic equations) Eponentials Logaritms Trig Functions Rational Functions

More information

1 Solutions to the in class part

1 Solutions to the in class part NAME: Solutions to te in class part. Te grap of a function f is given. Calculus wit Analytic Geometry I Exam, Friday, August 30, 0 SOLUTIONS (a) State te value of f(). (b) Estimate te value of f( ). (c)

More information

Pre-Calculus Review Preemptive Strike

Pre-Calculus Review Preemptive Strike Pre-Calculus Review Preemptive Strike Attaced are some notes and one assignment wit tree parts. Tese are due on te day tat we start te pre-calculus review. I strongly suggest reading troug te notes torougly

More information

(a) At what number x = a does f have a removable discontinuity? What value f(a) should be assigned to f at x = a in order to make f continuous at a?

(a) At what number x = a does f have a removable discontinuity? What value f(a) should be assigned to f at x = a in order to make f continuous at a? Solutions to Test 1 Fall 016 1pt 1. Te grap of a function f(x) is sown at rigt below. Part I. State te value of eac limit. If a limit is infinite, state weter it is or. If a limit does not exist (but is

More information

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator. Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative

More information

MATH1131/1141 Calculus Test S1 v8a

MATH1131/1141 Calculus Test S1 v8a MATH/ Calculus Test 8 S v8a October, 7 Tese solutions were written by Joann Blanco, typed by Brendan Trin and edited by Mattew Yan and Henderson Ko Please be etical wit tis resource It is for te use of

More information

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these. Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra

More information

Lecture 10: Carnot theorem

Lecture 10: Carnot theorem ecture 0: Carnot teorem Feb 7, 005 Equivalence of Kelvin and Clausius formulations ast time we learned tat te Second aw can be formulated in two ways. e Kelvin formulation: No process is possible wose

More information

Lesson 6: The Derivative

Lesson 6: The Derivative Lesson 6: Te Derivative Def. A difference quotient for a function as te form f(x + ) f(x) (x + ) x f(x + x) f(x) (x + x) x f(a + ) f(a) (a + ) a Notice tat a difference quotient always as te form of cange

More information

These errors are made from replacing an infinite process by finite one.

These errors are made from replacing an infinite process by finite one. Introduction :- Tis course examines problems tat can be solved by metods of approximation, tecniques we call numerical metods. We begin by considering some of te matematical and computational topics tat

More information

SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY

SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY (Section 3.2: Derivative Functions and Differentiability) 3.2.1 SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY LEARNING OBJECTIVES Know, understand, and apply te Limit Definition of te Derivative

More information

Logistic Kernel Estimator and Bandwidth Selection. for Density Function

Logistic Kernel Estimator and Bandwidth Selection. for Density Function International Journal of Contemporary Matematical Sciences Vol. 13, 2018, no. 6, 279-286 HIKARI Ltd, www.m-ikari.com ttps://doi.org/10.12988/ijcms.2018.81133 Logistic Kernel Estimator and Bandwidt Selection

More information

Dynamics and Relativity

Dynamics and Relativity Dynamics and Relativity Stepen Siklos Lent term 2011 Hand-outs and examples seets, wic I will give out in lectures, are available from my web site www.damtp.cam.ac.uk/user/stcs/dynamics.tml Lecture notes,

More information

Boosting Kernel Density Estimates: a Bias Reduction. Technique?

Boosting Kernel Density Estimates: a Bias Reduction. Technique? Boosting Kernel Density Estimates: a Bias Reduction Tecnique? Marco Di Marzio Dipartimento di Metodi Quantitativi e Teoria Economica, Università di Cieti-Pescara, Viale Pindaro 42, 65127 Pescara, Italy

More information

Bob Brown Math 251 Calculus 1 Chapter 3, Section 1 Completed 1 CCBC Dundalk

Bob Brown Math 251 Calculus 1 Chapter 3, Section 1 Completed 1 CCBC Dundalk Bob Brown Mat 251 Calculus 1 Capter 3, Section 1 Completed 1 Te Tangent Line Problem Te idea of a tangent line first arises in geometry in te context of a circle. But before we jump into a discussion of

More information

Chapter 1D - Rational Expressions

Chapter 1D - Rational Expressions - Capter 1D Capter 1D - Rational Expressions Definition of a Rational Expression A rational expression is te quotient of two polynomials. (Recall: A function px is a polynomial in x of degree n, if tere

More information

MA455 Manifolds Solutions 1 May 2008

MA455 Manifolds Solutions 1 May 2008 MA455 Manifolds Solutions 1 May 2008 1. (i) Given real numbers a < b, find a diffeomorpism (a, b) R. Solution: For example first map (a, b) to (0, π/2) and ten map (0, π/2) diffeomorpically to R using

More information

Robotic manipulation project

Robotic manipulation project Robotic manipulation project Bin Nguyen December 5, 2006 Abstract Tis is te draft report for Robotic Manipulation s class project. Te cosen project aims to understand and implement Kevin Egan s non-convex

More information

Economics 620, Lecture 18: Nonlinear Models

Economics 620, Lecture 18: Nonlinear Models Economics 620, Lecture 18: Nonlinear Models Nicholas M. Kiefer Cornell University Professor N. M. Kiefer (Cornell University) Lecture 18: Nonlinear Models 1 / 18 The basic point is that smooth nonlinear

More information

Introduction to Derivatives

Introduction to Derivatives Introduction to Derivatives 5-Minute Review: Instantaneous Rates and Tangent Slope Recall te analogy tat we developed earlier First we saw tat te secant slope of te line troug te two points (a, f (a))

More information