Boosting local quasi-likelihood estimators
Ann Inst Stat Math (2010) 62:235–248
DOI 10.1007/s

Boosting local quasi-likelihood estimators

Masao Ueki · Kaoru Fueda

Received: March 2007 / Revised: 8 February 2008 / Published online: 5 April 2008
© The Institute of Statistical Mathematics, Tokyo 2008

Abstract For likelihood-based regression contexts, including generalized linear models, this paper presents a boosting algorithm for local constant quasi-likelihood estimators. Its advantages are the following: (a) the one-boosted estimator reduces bias in local constant quasi-likelihood estimators without increasing the order of the variance, (b) the boosting algorithm requires only one-dimensional maximization at each boosting step and (c) the resulting estimators can be written explicitly and simply in some practical cases.

Keywords Bias reduction · L2Boosting · Generalized linear models · Kernel regression · Local quasi-likelihood · Nadaraya–Watson estimator

1 Introduction

This paper deals with likelihood-based regression problems for which generalized linear models are typically used. However, the effectiveness of generalized linear models is limited because of their restricted flexibility. In that case, it is better to use some nonparametric approach such as kernel regression (Wand and Jones 1995; Fan and Gijbels 1996). Fan et al. (1995) extended the local constant and local polynomial regression estimators to quasi-likelihood methods, which are an extension of generalized linear models (see Sect. 2.1). Loader (1999) recommends the local quadratic fit, which has bias of O(h⁴) and variance of O{(nh)⁻¹}, where h is the

M. Ueki (B) · K. Fueda
Graduate School of Environmental Science, Okayama University, Tsushima-naka, Okayama, Japan
e-mail: ueki@ems.okayama-u.ac.jp
K. Fueda
e-mail: fueda@ems.okayama-u.ac.jp
bandwidth, from a practical viewpoint. However, the local polynomial regression estimators require extensive computations because they rely on numerical maximization at each evaluated point. Fan (1999) overcomes that problem by introducing one-step local quasi-likelihood estimators, although some effort at implementation is needed. If one uses the local constant fit, such difficulties in both computation and implementation do not occur because it can be written explicitly and simply. However, the bias of the local constant fit is O(h²), which is often not negligible, while the variance is O{(nh)⁻¹}. We consequently take the course of not using local polynomials but applying a boosting algorithm to the local constant fit to reduce the order of the bias, where boosting is a recently investigated statistical methodology (Schapire 1990; Freund 1995; Freund and Schapire 1996; Friedman 2001; Bühlmann and Yu 2003; Marzio and Taylor 2004a). Marzio and Taylor (2004b) proposed boosting for the Nadaraya–Watson estimator, which is the local constant fit in the Gaussian model; the algorithm they applied is the L2Boosting of Friedman (2001) and Bühlmann and Yu (2003). The bias of the Nadaraya–Watson estimator is O(h²), and their one-boosted estimator reduces the bias to O(h⁴). This type of bias reduction has been examined by many authors (Jones et al. 1995; Choi and Hall 1998; Marzio and Taylor 2004a). The advantages of our algorithm are the following: (a) the one-boosted estimator reduces the bias of O(h²) to O(h⁴) without increasing the order of the variance, (b) our algorithm requires only one-dimensional maximization at each boosting step while the local polynomials need multi-dimensional maximization and (c) the resulting estimators can be written explicitly and simply in some practical cases. Our approach is also the simplest among the bias reduction techniques.

2 Boosting local constant quasi-likelihood estimators

2.1 Local constant quasi-likelihood estimators

This section describes local constant quasi-likelihood estimators.
Let (X₁, Y₁), ..., (X_n, Y_n) be a set of independent random pairs where, for each i, Y_i is a scalar response variable and X_i is an R^d-valued vector of covariates having density f with support supp(f) ⊆ R^d. Let (X, Y) denote a generic member of the sample, and let m(x) = E(Y | X = x). When the range of m(x) is restricted to an interval I of R, as in likelihood-based problems with generalized linear models such as Bernoulli, Poisson and gamma, it is suitable to estimate η(x) = g{m(x)} instead of m(x), where g is a one-to-one function from I to R called the link function. The quasi-likelihood method is an extension of the generalized linear models. The former requires only the specification of a relationship between the mean and variance of Y; it is useful even if the likelihood function is not available. The method maximizes the quasi-likelihood function Q{m(x), y} instead of the log-likelihood function. This paper treats only the case in which the conditional variance is modeled as var(Y | X = x) = V{m(x)} for some known positive function V, and the corresponding quasi-likelihood function Q(m, y) satisfies

∂Q(m, y)/∂m = (y − m)/V(m).   (1)
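Relation (1) can be checked numerically in a concrete case. The following small sketch (not from the paper; the function names are ours) compares a finite-difference derivative of the Poisson quasi-likelihood Q(m, y) = −m + y log m with (y − m)/V(m), where V(m) = m:

```python
import math

def Q(m, y):
    """Poisson quasi-likelihood, Q(m, y) = -m + y*log(m) (terms free of m dropped)."""
    return -m + y * math.log(m)

def quasi_score(m, y, eps=1e-6):
    """Central finite-difference approximation to dQ(m, y)/dm."""
    return (Q(m + eps, y) - Q(m - eps, y)) / (2.0 * eps)

# relation (1): dQ/dm should equal (y - m)/V(m), with V(m) = m for Poisson
lhs = quasi_score(1.7, 3.0)
rhs = (3.0 - 1.7) / 1.7
```

The two quantities agree to finite-difference accuracy, illustrating that the Poisson log-likelihood score is a special case of (1).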
The quasi-score (1) possesses properties that resemble those of the usual likelihood score function: one of them is that it satisfies the first two moment conditions of Bartlett's identities (Fan and Gijbels 1996). The likelihood score of a one-parameter exponential family is a special case of (1) (Fan et al. 1995). For simplicity, we deal with scalar covariates X₁, ..., X_n. The local constant quasi-likelihood estimator for m(x) can be written explicitly as

m̂₀(x; h) = g⁻¹{η̂₀(x; h)} = Σ_{i=1}^n K_h(X_i − x) Y_i / Σ_{i=1}^n K_h(X_i − x),   (2)

which is given by maximizing Σ_{i=1}^n Q{g⁻¹(η), Y_i} K_h(X_i − x) with respect to η, where K_h(z) = K(z/h)/h, K(z) is a symmetric unimodal probability density called the kernel function, and h > 0 is a parameter called the bandwidth, which controls the extent of smoothing. The estimator (2) is simple, but it performs poorly. In the next section, we strengthen (2) using boosting.

2.2 The boosting algorithm

In L2Boosting, a simple base estimator, called the weak learner, is used iteratively in least-squares fitting with stage-wise updating of current residuals. In this section, before proposing the boosting local quasi-likelihood estimators, we describe the L2Boosting algorithm proposed by Marzio and Taylor (2004b), where the weak learner is the Nadaraya–Watson estimator, which corresponds to (2) when the link function g is the identity. The algorithm is given as follows.

Algorithm 1
Step 1 (initialization) Let m̂₀ be the Nadaraya–Watson estimator with a previously chosen h > 0.
Step 2 (iteration) Repeat for b = 0, ..., B,
(i) Compute the n estimates m̂_b(X_i), i = 1, ..., n.
(ii) Update m̂_{b+1}(x) = m̂_b(x) + δ̂(x), where δ̂(x) is the Nadaraya–Watson estimator in which the response variables Y_i are replaced by the current residuals U_i = Y_i − m̂_b(X_i), i.e.,

δ̂(x) = Σ_{i=1}^n K_h(X_i − x) U_i / Σ_{i=1}^n K_h(X_i − x).   (3)

Least-squares fitting can be viewed as an optimization in the Gaussian regression model. This consideration enables us to generalize L2Boosting to a quasi-likelihood framework.
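Algorithm 1 is easy to express in code. The following is a minimal sketch (our own illustration, not the authors' implementation; the Epanechnikov kernel and the function names are assumptions):

```python
import numpy as np

def nadaraya_watson(x_eval, X, Y, h):
    """Local constant fit: kernel-weighted average of Y at each point of x_eval."""
    u = (X[None, :] - x_eval[:, None]) / h               # pairwise (X_i - x)/h
    w = np.where(np.abs(u) < 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov kernel
    return (w @ Y) / np.maximum(w.sum(axis=1), 1e-12)

def l2boost_nw(x_eval, X, Y, h, B=2):
    """Algorithm 1: add the NW fit of the current residuals at each iteration."""
    m_eval = nadaraya_watson(x_eval, X, Y, h)   # Step 1: initialization
    m_data = nadaraya_watson(X, X, Y, h)
    for _ in range(B):                          # Step 2: iteration
        U = Y - m_data                          # current residuals
        m_eval = m_eval + nadaraya_watson(x_eval, X, U, h)
        m_data = m_data + nadaraya_watson(X, X, U, h)
    return m_eval
```

Note that the kernel normalization cancels in the ratio, so the factor 1/h is omitted from the weights.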
Here, we must take into account that additivity in Step 2(ii) does not necessarily hold in this framework. To achieve the generalization, we rewrite (3) as

δ̂(x) = argmax_δ { − Σ_{i=1}^n [ Y_i − {m̂_b(X_i) + δ} ]² K_h(X_i − x) }.   (4)
Based on the form of (4), we generalize Algorithm 1 for the local constant quasi-likelihood estimators (2) as follows.

Algorithm 2
Step 1 (initialization) Let η̂₀ be (2) with a previously chosen h > 0.
Step 2 (iteration) Repeat for b = 0, ..., B,
(i) Compute the n estimates η̂_b(X_i), i = 1, ..., n.
(ii) Update η̂_{b+1}(x) = η̂_b(x) + δ̂(x), where

δ̂(x) = argmax_{δ∈R} Σ_{i=1}^n Q[ g⁻¹{η̂_b(X_i) + δ}, Y_i ] K_h(X_i − x).   (5)

We can obtain the estimator for m(x) by m̂_{b+1}(x) = g⁻¹{η̂_{b+1}(x)}. Note that δ̂(x) is added in η's space for range preservation, and (5) requires scalar maximization only, even for multiple covariates. Furthermore, there exist cases in which the resulting estimator can be written explicitly and simply, as follows.

Example 1 (Gaussian model with identity link) This example corresponds to Algorithm 1. The quasi-likelihood function then coincides with the usual log-likelihood function of the Gaussian distribution with mean m and variance unity: Q(m, y) = −{(y − m)² + log(2π)}/2; V(m) = 1. The link function g is the identity, η = g(m) = m. At the b-th stage, m̂_{b+1}(x) = m̂_b(x) + δ̂(x), where δ̂(x) is given in (3).

Example 2 (Poisson model with log link) The link function g is the log link, η = g(m) = log m. The quasi-likelihood function then coincides with the usual log-likelihood function of the Poisson distribution with mean m: Q(m, y) = −m + y log m − log y!; V(m) = m. At the b-th stage, η̂_{b+1}(x) = η̂_b(x) + δ̂(x), where

exp{δ̂(x)} = Σ_{i=1}^n K_h(X_i − x) Y_i / Σ_{i=1}^n K_h(X_i − x) exp{η̂_b(X_i)}.

Example 3 (gamma model with log link) The link function g is the log link, η = g(m) = log m. The quasi-likelihood function then coincides with the usual log-likelihood function of the gamma density with mean m and shape parameter α: Q(m, y) = −αy/m − α log m + (α − 1) log y + α log α − log Γ(α), where Γ(·) is the gamma function; V(m) = m²/α. At the b-th stage, η̂_{b+1}(x) = η̂_b(x) + δ̂(x), where

exp{δ̂(x)} = Σ_{i=1}^n K_h(X_i − x) exp{−η̂_b(X_i)} Y_i / Σ_{i=1}^n K_h(X_i − x).

2.3 Emphasizing the updating term

Primarily, boosting can be regarded as a sequential greedy optimization of additive models, which is typical in L2Boosting.
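As an illustration of Example 2 above, Algorithm 2 for the Poisson model can be sketched without any numerical maximization, using the explicit update exp{δ̂(x)} = Σ K_h(X_i − x)Y_i / Σ K_h(X_i − x)exp{η̂_b(X_i)} (a hypothetical implementation; the kernel choice and function names are our assumptions):

```python
import numpy as np

def epan_weights(x_eval, X, h):
    """Epanechnikov kernel weights K((X_i - x)/h); normalization cancels in ratios."""
    u = (X[None, :] - x_eval[:, None]) / h
    return np.where(np.abs(u) < 1, 0.75 * (1 - u**2), 0.0)

def boost_poisson(x_eval, X, Y, h, B=1):
    W_eval = epan_weights(x_eval, X, h)
    W_data = epan_weights(X, X, h)
    # initialization: eta_0(x) = log of the local constant fit (2)
    eta_eval = np.log((W_eval @ Y) / np.maximum(W_eval.sum(axis=1), 1e-12))
    eta_data = np.log((W_data @ Y) / np.maximum(W_data.sum(axis=1), 1e-12))
    for _ in range(B):
        mu = np.exp(eta_data)                   # fitted means at the data points
        # explicit update of Example 2; adding in eta-space keeps m_hat positive
        eta_eval = eta_eval + np.log((W_eval @ Y) / np.maximum(W_eval @ mu, 1e-12))
        eta_data = eta_data + np.log((W_data @ Y) / np.maximum(W_data @ mu, 1e-12))
    return np.exp(eta_eval)                     # estimate of m(x) after B boosts
```

The update is a ratio of two kernel sums, so each boosting step costs no more than the initial fit.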
The updating term in each step of boosting can be regarded as an estimation using iteratively reweighted data. From this viewpoint, we specifically examine the updating term defined in (5).
A second-order Taylor approximation in (5) yields

l_n(δ) = Σ_{i=1}^n K_h(X_i − x) Q[ g⁻¹{η̂_b(X_i) + δ}, Y_i ]
≈ Σ_{i=1}^n K_h(X_i − x) ( Q[ g⁻¹{η̂_b(X_i)}, Y_i ] + δ q₁{η̂_b(X_i), Y_i} + (δ²/2) q₂{η̂_b(X_i), Y_i} ),

where q₁ and q₂ are defined in the Appendix. Therefore, the updating term is approximated as

δ̂(x) ≈ − Σ_{i=1}^n K_h(X_i − x) q₁{η̂_b(X_i), Y_i} / Σ_{i=1}^n K_h(X_i − x) q₂{η̂_b(X_i), Y_i}
= Σ_{i=1}^n K_h(X_i − x) [ g'{m̂_b(X_i)} V{m̂_b(X_i)} ]⁻¹ {Y_i − m̂_b(X_i)} / ( − Σ_{i=1}^n K_h(X_i − x) q₂{η̂_b(X_i), Y_i} ).   (6)

Using (6), the updating term δ̂(x) can be interpreted approximately as a reweighted version of the kernel regressor (3) in Algorithm 1, in which the response variables Y_i are replaced by the current residuals. This consideration describes a transparent relationship between the proposed algorithm and L2Boosting.

3 Bias reduction property

In the following theorem, we state the bias reduction property: the one-boosted estimator η̂₁(x; h) reduces the bias of O(h²) in the local constant quasi-likelihood estimator, which is often not negligible, to O(h⁴).

Theorem 1 Suppose that the conditions presented in the Appendix hold. If h → 0 and nh → ∞ as n → ∞, the estimator after one boosting iteration, η̂₁(x; h), has bias of O(h⁴) and variance of

var{η̂₁(x; h)} = ( var(Y | X = x) g'{m(x)}² / {nh f(x)} ) ∫ T_K(z)² dz + o{(nh)⁻¹},

where T_K(z) = 2K(z) − K∗K(z) is the fourth-order kernel in Jones et al. (1995, Theorem 1). In addition, m̂₁(x; h) = g⁻¹{η̂₁(x; h)} has bias of O(h⁴) and variance of

var{m̂₁(x; h)} = var{η̂₁(x; h)} / g'{m(x)}² + o{(nh)⁻¹}.

The proof is given in the Appendix. According to Fan et al. (1995), the bias and variance of the local constant quasi-likelihood estimator, i.e., the non-boosted estimator, are O(h²) and O{(nh)⁻¹}, respectively, which in turn implies that the bias reduction is achieved without increasing the order of the variance.
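The fourth-order property of the twicing kernel T_K(z) = 2K(z) − K∗K(z) can be verified numerically: its zeroth moment is 1 while its first and second moments vanish. A small sketch (our own check, assuming the Epanechnikov kernel and a simple Riemann-sum discretization):

```python
import numpy as np

z = np.linspace(-2.0, 2.0, 4001)        # K*K is supported on [-2, 2]
dz = z[1] - z[0]
K = np.where(np.abs(z) < 1, 0.75 * (1 - z**2), 0.0)   # Epanechnikov kernel
KK = np.convolve(K, K, mode="same") * dz              # discrete convolution (K*K)(z)
T = 2.0 * K - KK                                      # twicing kernel T_K

# zeroth, first and second moments of T_K; a fourth-order kernel gives 1, 0, 0
moments = [float(np.sum(z**p * T) * dz) for p in (0, 1, 2)]
```

The second moment cancels because the second moment of K∗K (the variance of a sum of two independent draws from K) is exactly twice that of K.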
4 Numerical illustrations

This section provides some numerical illustrations in the Poisson (Example 2) and exponential (Example 3 with α = 1) models. We use the Epanechnikov kernel K(u) = (3/4)(1 − u²) 1{−1 < u < 1}, where 1{·} is the indicator function. The examined conditional means are

m_p1(x) = exp{cos(πx)}, m_p2(x) = arcsin(x) + 2, for Poisson,
m_e1(x) = 8 exp(−x²), m_e2(x) = (x + 1)²/2 + 4, for exponential,

and the design density f(x) is the uniform density on [−1, 1]. To measure the performance of a resulting estimator m̂(x), we use the square root of the average squared errors,

RASE = [ 100⁻¹ Σ_{j=1}^{100} {m(x_j) − m̂(x_j)}² ]^{1/2},

for x_j = −1 + 2(j − 1)/99, j = 1, ..., 100, at which the function m(x) is estimated. The sample size n is 200 throughout.

To show how the proposed algorithm works, we demonstrate its behavior for one random sample in Fig. 1 (Poisson) and Fig. 2 (exponential), where m̂_b(x) is plotted for b = 0 (dashed), 1 (dotted), 2 (dot-dashed) and 3 (long-dashed), together with the true curve (solid). The bandwidths used in the left and right panels are optimal, respectively, for b = 0 and b = 1, and are found numerically with respect to the RASE. The estimates m̂₁(x) in the right panels appear to fit the true curves better than the m̂₀(x) in the left panels: the optimal boosted estimators are better than the optimal non-boosted ones.

Fig. 1 Estimates for one random sample in a Poisson case: panels a, b for m_p1(x); panels c, d for m_p2(x) (bandwidths chosen as described in the text)
Fig. 2 Estimates for one random sample in an exponential case: panels a, b for m_e1(x); panels c, d for m_e2(x) (bandwidths chosen as described in the text)

We examine the boosting for various h, and repeat the procedure 500 times to illustrate the efficiency. Figure 3 shows the average RASEs against h, where the plotted numbers 0–3 indicate the corresponding boosting iterations (0 corresponds to the non-boosted estimator, i.e., the local constant quasi-likelihood estimator). All panels suggest that boosting works well for an appropriate h because each minimum RASE of a boosted estimate is smaller than that of the non-boosted estimate. Note that the h which minimizes the RASE tends to increase as the number of boosting iterations grows. This phenomenon is identical to that observed in Marzio and Taylor (2004a) for boosting kernel density estimates. Therefore, as the strategy to select h, we recommend taking h somewhat larger than the optimal one for the non-boosted estimator.

Next, we verify the implication of Theorem 1 related to the mean squared error (MSE). Table 1 compares the theoretical MSE expressions given in Theorem 1 with simulated true MSEs in 1000 experiments, where both the non-boosted and one-boosted estimators, m̂₀(x) and m̂₁(x), are evaluated at three points, x = 0.2, 0, 0.6. The results show that the asymptotic MSE expressions given in Theorem 1 approximate the true MSEs well.

In practice, the bandwidth h must be estimated from the data. One way of choosing h is to use likelihood-based cross-validation. The method is useful when the form of Q(m, y) is known, as in generalized linear models. The bandwidth ĥ selected by the cross-validation is the h maximizing

Σ_{i=1}^n Q{ m̂_{−i}(X_i), Y_i },
where m̂_{−i}(·) corresponds to the version of m̂(·) that is constructed by eliminating the i-th datum (X_i, Y_i). Table 2 shows the average RASEs in 500 simulation experiments for the respective boosting iteration numbers 0–3, with bandwidths selected using cross-validation. The bandwidths selected here are chosen among finite candidates, which consist of 50 equi-spaced points on the intervals given in the second line of Table 2. These intervals are determined empirically according to the variability of ĥ. From the results in Table 2, we ascertain that the boosted estimation, at least once, works better than the non-boosted estimation, even if the bandwidths are estimated using cross-validation. Consequently, it is worthwhile to apply the boosting algorithm in practical situations.

Fig. 3 Average RASE plots: a for m_p1(x), b for m_p2(x), c for m_e1(x) and d for m_e2(x). The plotted numbers 0–3 indicate the corresponding boosting iterations

Table 1 Theoretical MSE expressions and simulated MSEs for the non-boosted and one-boosted estimators, m̂₀(x) and m̂₁(x). In the table, each MSE evaluated at x = 0.2, 0, 0.6 is described in the order corresponding to that of x. T and S denote theoretical and simulated values, respectively. B means the boosting iteration number.
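The leave-one-out criterion above can be sketched in code for the Poisson model, where the explicit form (2) makes m̂_{−i} cheap to compute (an illustrative sketch with our own function names; the candidate grid and the kernel are assumptions):

```python
import numpy as np

def cv_bandwidth(X, Y, candidates):
    """Likelihood-based CV: pick the h maximizing sum_i Q{m_loo_i(X_i), Y_i},
    with the Poisson quasi-likelihood Q(m, y) = -m + y*log(m)."""
    u = X[None, :] - X[:, None]          # u[i, j] = X_j - X_i
    best_h, best_score = None, -np.inf
    for h in candidates:
        w = np.where(np.abs(u / h) < 1, 0.75 * (1 - (u / h)**2), 0.0)
        np.fill_diagonal(w, 0.0)         # leave-one-out: drop (X_i, Y_i) itself
        m_loo = (w @ Y) / np.maximum(w.sum(axis=1), 1e-12)
        m_loo = np.maximum(m_loo, 1e-12) # guard against log(0)
        score = np.sum(-m_loo + Y * np.log(m_loo))
        if score > best_score:
            best_h, best_score = h, score
    return best_h
```

Because the local constant fit is a ratio of kernel sums, the n leave-one-out fits cost only one pass over the pairwise kernel matrix per candidate bandwidth.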
Table 2 Average RASE for each boosting number, with bandwidth selected by cross-validation. B means the boosting iteration number. The candidate bandwidths consist of 50 equi-spaced points on the intervals given in the second line of the table.

5 Concluding remarks

We have proposed a boosting algorithm for local constant quasi-likelihood estimators that provides bias reduction. The method is convenient in both computation and implementation. There are still some issues. The first is the selection of h. A reasonable solution in generalized linear models is to use likelihood-based cross-validation. In some cases, in which the resulting estimators are given explicitly, the computations required for cross-validation are few. However, in other cases, in which numerical maximizations are required, including logistic regression, the required computations could be expensive. For this reason, better selection criteria are needed. The second is how to stop the boosting iteration. In our examinations, the two-boosted and three-boosted estimators work better than the one-boosted estimators. However, as Bühlmann and Yu (2003) pointed out, many boosting iterations cause overfitting. To avoid this, we have to stop the iteration based on a stopping rule such as cross-validation. The third is to analyze the two-boosted and more-boosted estimators, because we have justified only the one-boosted estimators in this paper.

Acknowledgments The authors would like to thank the referees for helpful suggestions that improved the paper considerably.

Appendix: Proof of Theorem 1

Preliminary

Let q_i(η, y) = (∂^i/∂η^i) Q{g⁻¹(η), y} for i = 1, 2, 3. Since Q satisfies (1), q_i is linear in y for fixed η, q₁{η(x), m(x)} = 0 and q₂{η(x), m(x)} = −ρ(x), where ρ(x) = [ g'{m(x)}² V{m(x)} ]⁻¹. Also let σ²(x) = var(Y | X = x).
We present the conditions:

(i) The function q₂(η, y) < 0 for η ∈ R and y in the range of the response variable;
(ii) The functions f^{(4)}, η^{(4)}, σ², V and g^{(4)} are continuous;
(iii) For each x ∈ supp(f), ρ(x), σ²(x) and g'{m(x)} are nonzero;
(iv) The kernel K is a symmetric probability density with support [−1, 1];
(v) x is an interior point of supp(f).

Furthermore, we assume that h ≍ n^{−1/9}, which is the optimal rate that minimizes the asymptotic MSE of order O{h⁸ + (nh)⁻¹}. See the argument below Theorem 1 of Jones et al. (1995). We also write (fρ)(x) = f(x)ρ(x) and (mf)(x) = m(x)f(x).
Let δ* = a_n⁻¹ δ, η̂_i(x) = η̂_i(x; h) for i = 0, 1, m̂₀(x) = m̂₀(x; h) and

l_n(δ*) = Σ_{i=1}^n K{(X_i − x)/h} ( Q[ g⁻¹{η̂₀(X_i) + a_n δ*}, Y_i ] − Q[ g⁻¹{η̂₀(X_i)}, Y_i ] ),

where a_n = (nh)^{−1/2}. Condition (i) implies that l_n is concave in δ*. Let δ̂* be the maximizer of l_n(δ*); then

δ̂* = (fρ)(x)⁻¹ W_n + o_p(1), where W_n = a_n Σ_{i=1}^n K{(X_i − x)/h} q₁{η̂₀(X_i), Y_i}.   (7)

The derivation of (7) is as follows. Using a Taylor expansion,

l_n(δ*) = W_n δ* + (1/2) A_n δ*² + (a_n³/6) Σ_{i=1}^n K{(X_i − x)/h} q₃(η_i*, Y_i) δ*³,   (8)

where η_i* is between η̂₀(X_i) and η̂₀(X_i) + a_n δ*, and A_n = a_n² Σ_{i=1}^n K{(X_i − x)/h} q₂{η̂₀(X_i), Y_i}. By η̂₀(x) = η(x) + o_p(1),

E(A_n) = h⁻¹ E[ K{(X₁ − x)/h} q₂{η(X₁), m(X₁)} ] + o(1) = −(fρ)(x) + o(1)

and var(A_n) = O(a_n²); using A_n = E(A_n) + O_p{var(A_n)^{1/2}}, we have A_n = −(fρ)(x) + o_p(1). A similar argument to that in Fan and Gijbels (1996) shows that the last term in (8) is bounded by O_p(a_n). Therefore, l_n(δ*) = W_n δ* − (1/2)(fρ)(x) δ*² + o_p(1). Using the quadratic approximation lemma (Fan and Gijbels 1996), we obtain (7).

Bias

First, we derive the bias. Let μ₂ = ∫ z² K(z) dz. Using n⁻¹ Σ_{i=1}^n K_h(X_i − x) = f(x) + (h² μ₂/2) f″(x) + O_p(h⁴), which holds by conditions (ii), (iv) and (nh)^{−1/2} ≍ n^{−4/9} ≍ h⁴, it follows from (2) that

g⁻¹{η̂₀(x)} = m̂₀(x) = {n f(x)}⁻¹ Σ_{j=1}^n K_h(X_j − x) Y_j { 1 − (h² μ₂/2) f″(x)/f(x) } + O_p(h⁴).   (9)
Using q₁{η(x), y} = g'{m(x)} ρ(x) [ y − g⁻¹{η(x)} ], (7) is rewritten as

δ̂* = a_n (fρ)(x)⁻¹ Σ_{i=1}^n K{(X_i − x)/h} ρ(X_i) g'{m(X_i)} [ Y_i − {n f(X_i)}⁻¹ Σ_{j≠i} K_h(X_j − X_i) Y_j { 1 − (h² μ₂/2) f″(X_i)/f(X_i) } ] + O_p(a_n⁻¹ h⁴).   (10)

In addition, using conditions (ii), (iv) and ∫ K_h(v − u)(mf)(v) dv = (mf)(u) + (h² μ₂/2)(mf)″(u) + O(h⁴),

E(δ̂*) = a_n n ∫ (fρ)(u) K{(u − x)/h} g'{m(u)} (fρ)(x)⁻¹ [ m(u) − f(u)⁻¹ { ∫ K_h(v − u)(mf)(v) dv } { 1 − (h² μ₂/2) f″(u)/f(u) } ] du + O(a_n⁻¹ h⁴)
= −a_n⁻¹ h² (μ₂/2) ∫ (fρ)(u) K_h(u − x) g'{m(u)} (fρ)(x)⁻¹ { (mf)″(u)/f(u) − (m f″)(u)/f(u) } du + O(a_n⁻¹ h⁴)
= −a_n⁻¹ h² (g'{m(x)}/f(x)) (μ₂/2) { (mf)″(x) − (m f″)(x) } + O(a_n⁻¹ h⁴).   (11)

According to Fan et al. (1995), the bias of η̂₀(x) is given as

E{η̂₀(x)} − η(x) = (g'{m(x)}/f(x)) (h² μ₂/2) { (mf)″(x) − (m f″)(x) } + O(h⁴).   (12)

Combining (11) and (12), and recalling δ̂(x) = a_n δ̂*, we can show that the bias of η̂₁(x) = η̂₀(x) + δ̂(x) is O(h⁴).

Variance

Secondly, we derive the variance. Define η̂_i*(x) = a_n⁻¹ [ η̂_i(x) − E{η̂_i(x) | X} ] for i = 0, 1, where E{· | X} is the conditional expectation given X₁, ..., X_n. Then it holds that

var{η̂₁(x)} = a_n² E{η̂₁*(x)²} + E( [ E{η̂₁(x) | X} − η(x) ]² ) − [ E{η̂₁(x)} − η(x) ]²,

where the third term, the squared bias, is {O(h⁴)}². To calculate the second term, we first note, using the Taylor expansion for (9), that

η̂₀(x) − η(x) = ( g'{m(x)}/{n f(x)} ) Σ_{j=1}^n [ K_h(X_j − x) Y_j { 1 − (h² μ₂/2) f″(x)/f(x) } − (mf)(x) ] + O_p(h⁴).   (13)
From (10) and (13),

E{η̂₁(x) | X} − η(x) = E{η̂₀(x) − η(x) | X} + E{δ̂(x) | X} = D + O_p(h⁴),   (14)

where D = n⁻¹ Σ_{i=1}^n R_i + n⁻² Σ_{i≠j} S_ij, R_i = R(X_i), S_ij = S(X_i, X_j),

R(X_i) = ( g'{m(x)}/f(x) ) [ K_h(X_i − x) m(X_i) { 1 − (h² μ₂/2) f″(x)/f(x) } − (mf)(x) ] + (fρ)(x)⁻¹ K_h(X_i − x) ρ(X_i) g'{m(X_i)} m(X_i),

S(X_i, X_j) = −(fρ)(x)⁻¹ K_h(X_i − x) ρ(X_i) g'{m(X_i)} f(X_i)⁻¹ K_h(X_j − X_i) m(X_j) { 1 − (h² μ₂/2) f″(X_i)/f(X_i) }.

Note that E(D) equals the bias of η̂₁(x) with error O(h⁴), i.e., E(D) = O(h⁴). Observing that E₁(R²) and E₂(S²) are of order O(h⁻¹),

E(D²) = E( n⁻² Σ_{i,j} R_i R_j + 2 n⁻³ Σ_{i, j≠k} R_i S_jk + n⁻⁴ Σ_{i≠j, k≠l} S_ij S_kl )
= {E₁(R)}² + 2 E₁(R) E₂(S) + {E₂(S)}² + O{(nh)⁻¹}
= {E₁(R) + E₂(S)}² + O{(nh)⁻¹} = {E(D)}² + O{(nh)⁻¹} = O(h⁸),

where E₁ and E₂ represent expectations with respect to X₁ and (X₁, X₂), respectively. Therefore, the second term is also of order O(h⁸). It is, after all, sufficient to calculate E{η̂₁*(x)²}. By (13) and (n a_n)⁻¹ = a_n h,

η̂₀*(x) = a_n ( g'{m(x)}/f(x) ) Σ_{j=1}^n K{(X_j − x)/h} Ỹ_j + O_p(h²),   (15)

in which Ỹ_i = Y_i − m(X_i). On the other hand, defining

G_{r,n} = a_n Σ_{i=1}^n K{(X_i − x)/h} { Ỹ_i − {n f(X_i)}⁻¹ Σ_{j≠i} K_h(X_j − X_i) Ỹ_j } (X_i − x)^r

for r = 0, 1 and ξ(x) = ρ(x) g'{m(x)}, it follows from Taylor expanding ξ(X_i) around x in (10), with condition (ii), that

δ̂* − E(δ̂* | X) = (fρ)(x)⁻¹ { G_{0,n} ξ(x) + G_{1,n} ξ'(x) } + O_p(h²) = ( g'{m(x)}/f(x) ) G_{0,n} + o_p(1).   (16)
The second equality follows from G_{1,n} = o_p(1), which we show in what follows. Observing that E(Ỹ_i Ỹ_j) = 0 if i ≠ j and E(Ỹ_i²) = ∫ σ²(w) f(w) dw otherwise, we have

a_n⁻² E(G_{r,n}²) = E[ Σ_{i,k=1}^n K{(X_i − x)/h} K{(X_k − x)/h} { Ỹ_i − {n f(X_i)}⁻¹ Σ_{j≠i} K_h(X_j − X_i) Ỹ_j } { Ỹ_k − {n f(X_k)}⁻¹ Σ_{l≠k} K_h(X_l − X_k) Ỹ_l } (X_i − x)^r (X_k − x)^r ]
= I₁ − 2 I₂ + I₃ + o(I₁ + I₂ + I₃),

where

I₁ = n ∫ K{(w − x)/h}² (f σ²)(w) (w − x)^{2r} dw,
I₂ = n ∫ K{(w − x)/h} ∫ K{(u − x)/h} K_h(u − w) (f σ²)(u) (u − x)^r du (w − x)^r dw,
I₃ = n ∫ K{(u − x)/h} ∫ K{(v − x)/h} ∫ K_h(w − u) K_h(w − v) (f σ²)(w) dw (u − x)^r (v − x)^r du dv.

Then,

I₁ = n h^{2r+1} ∫ z^{2r} K(z)² (f σ²)(x + hz) dz = O(a_n⁻² h^{2r})

and

I₂ = n h^{2r+1} ∫ K(z) ∫ K(s) K(s − z) (f σ²)(x + hs) s^r ds z^r dz = O(a_n⁻² h^{2r}).

Similarly,

I₃ = n h^{2r+1} ∫ K(s) ∫ K(t) ∫ K(z − s) K(z − t) (f σ²)(x + hz) dz s^r ds t^r dt = O(a_n⁻² h^{2r}).

Thus, we deduce that E(G_{r,n}²) = O(h^{2r}). Noting that E(G_{r,n}) = 0, we have var(G_{r,n}) = E(G_{r,n}²). This implies that G_{r,n} = O_p(h^r); in particular, G_{1,n} = o_p(1), thereby yielding (16).
Combining (15) and (16), we can write

η̂₁*(x) = ( g'{m(x)}/f(x) ) Z_n + o_p(1),   (17)

where

Z_n = a_n Σ_{i=1}^n K{(X_i − x)/h} { 2Ỹ_i − {n f(X_i)}⁻¹ Σ_{j≠i} K_h(X_j − X_i) Ỹ_j };

consequently,

E{η̂₁*(x)²} = [ g'{m(x)}/f(x) ]² E(Z_n²) + o(1).

The same arguments as in deriving (16) apply to the calculation of E(Z_n²), which yields the variance.

Bias and variance of m̂₁(x)

The assertion regarding the estimator for m(x) is straightforwardly obtained by noting m̂₁(x) = g⁻¹{η̂₁(x)} and using the same process as that of the proof of Theorem 1 in Fan et al. (1995).

References

Bühlmann, P., Yu, B. (2003). Boosting with the L2 loss: regression and classification. Journal of the American Statistical Association, 98, 324–339.
Choi, E., Hall, P. (1998). On bias reduction in local linear smoothing. Biometrika, 85, 333–345.
Fan, J. (1999). One-step local quasi-likelihood estimation. Journal of the Royal Statistical Society, Ser. B, 61.
Fan, J., Gijbels, I. (1996). Local polynomial modelling and its applications. London: Chapman and Hall.
Fan, J., Heckman, N. E., Wand, M. P. (1995). Local polynomial kernel regression for generalized linear models and quasi-likelihood functions. Journal of the American Statistical Association, 90, 141–150.
Freund, Y. (1995). Boosting a weak learning algorithm by majority. Information and Computation, 121, 256–285.
Freund, Y., Schapire, R. E. (1996). Experiments with a new boosting algorithm. In: Saitta, L. (Ed.), Machine learning: Proceedings of the thirteenth international conference (pp. 148–156). San Francisco: Morgan Kaufmann.
Friedman, J. (2001). Greedy function approximation: a gradient boosting machine. The Annals of Statistics, 29, 1189–1232.
Jones, M. C., Linton, O., Nielsen, J. (1995). A simple bias reduction method for density estimation. Biometrika, 82, 327–338.
Loader, C. R. (1999). Local regression and likelihood. New York: Springer.
Marzio, M. D., Taylor, C. C. (2004a). Boosting kernel density estimates: A bias reduction technique? Biometrika, 91, 226–233.
Marzio, M. D., Taylor, C. C. (2004b). Multistep kernel regression smoothing by boosting.
leeds.ac.uk/~charles/boostreg.pdf, unpublished manuscript.
Schapire, R. E. (1990). The strength of weak learnability. Machine Learning, 5, 197–227.
Wand, M. P., Jones, M. C. (1995). Kernel smoothing. London: Chapman and Hall.
Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative
More informationContinuity and Differentiability of the Trigonometric Functions
[Te basis for te following work will be te definition of te trigonometric functions as ratios of te sides of a triangle inscribed in a circle; in particular, te sine of an angle will be defined to be te
More informationContinuity and Differentiability Worksheet
Continuity and Differentiability Workseet (Be sure tat you can also do te grapical eercises from te tet- Tese were not included below! Typical problems are like problems -3, p. 6; -3, p. 7; 33-34, p. 7;
More informationMath 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006
Mat 102 TEST CHAPTERS 3 & 4 Solutions & Comments Fall 2006 f(x+) f(x) 10 1. For f(x) = x 2 + 2x 5, find ))))))))) and simplify completely. NOTE: **f(x+) is NOT f(x)+! f(x+) f(x) (x+) 2 + 2(x+) 5 ( x 2
More informationNUMERICAL DIFFERENTIATION. James T. Smith San Francisco State University. In calculus classes, you compute derivatives algebraically: for example,
NUMERICAL DIFFERENTIATION James T Smit San Francisco State University In calculus classes, you compute derivatives algebraically: for example, f( x) = x + x f ( x) = x x Tis tecnique requires your knowing
More informationChapter 2 Limits and Continuity
4 Section. Capter Limits and Continuity Section. Rates of Cange and Limits (pp. 6) Quick Review.. f () ( ) () 4 0. f () 4( ) 4. f () sin sin 0 4. f (). 4 4 4 6. c c c 7. 8. c d d c d d c d c 9. 8 ( )(
More informationLecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines
Lecture 5 Interpolation II Introduction In te previous lecture we focused primarily on polynomial interpolation of a set of n points. A difficulty we observed is tat wen n is large, our polynomial as to
More information5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems
5 Ordinary Differential Equations: Finite Difference Metods for Boundary Problems Read sections 10.1, 10.2, 10.4 Review questions 10.1 10.4, 10.8 10.9, 10.13 5.1 Introduction In te previous capters we
More informationA MONTE CARLO ANALYSIS OF THE EFFECTS OF COVARIANCE ON PROPAGATED UNCERTAINTIES
A MONTE CARLO ANALYSIS OF THE EFFECTS OF COVARIANCE ON PROPAGATED UNCERTAINTIES Ronald Ainswort Hart Scientific, American Fork UT, USA ABSTRACT Reports of calibration typically provide total combined uncertainties
More informationChapter 2. Limits and Continuity 16( ) 16( 9) = = 001. Section 2.1 Rates of Change and Limits (pp ) Quick Review 2.1
Capter Limits and Continuity Section. Rates of Cange and Limits (pp. 969) Quick Review..... f ( ) ( ) ( ) 0 ( ) f ( ) f ( ) sin π sin π 0 f ( ). < < < 6. < c c < < c 7. < < < < < 8. 9. 0. c < d d < c
More informationEFFICIENT REPLICATION VARIANCE ESTIMATION FOR TWO-PHASE SAMPLING
Statistica Sinica 13(2003), 641-653 EFFICIENT REPLICATION VARIANCE ESTIMATION FOR TWO-PHASE SAMPLING J. K. Kim and R. R. Sitter Hankuk University of Foreign Studies and Simon Fraser University Abstract:
More informationHazard Rate Function Estimation Using Erlang Kernel
Pure Matematical Sciences, Vol. 3, 04, no. 4, 4-5 HIKARI Ltd, www.m-ikari.com ttp://dx.doi.org/0.988/pms.04.466 Hazard Rate Function Estimation Using Erlang Kernel Raid B. Sala Department of Matematics
More informationPoisson Equation in Sobolev Spaces
Poisson Equation in Sobolev Spaces OcMountain Dayligt Time. 6, 011 Today we discuss te Poisson equation in Sobolev spaces. It s existence, uniqueness, and regularity. Weak Solution. u = f in, u = g on
More information4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.
Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra
More informationPolynomial Interpolation
Capter 4 Polynomial Interpolation In tis capter, we consider te important problem of approximating a function f(x, wose values at a set of distinct points x, x, x 2,,x n are known, by a polynomial P (x
More informationModel Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1
Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1 By Jiti Gao 2 and Maxwell King 3 Abstract We propose a simultaneous model specification procedure for te conditional
More informationNonparametric density estimation for linear processes with infinite variance
Ann Inst Stat Mat 2009) 61:413 439 DOI 10.1007/s10463-007-0149-x Nonparametric density estimation for linear processes wit infinite variance Tosio Honda Received: 1 February 2006 / Revised: 9 February
More information158 Calculus and Structures
58 Calculus and Structures CHAPTER PROPERTIES OF DERIVATIVES AND DIFFERENTIATION BY THE EASY WAY. Calculus and Structures 59 Copyrigt Capter PROPERTIES OF DERIVATIVES. INTRODUCTION In te last capter you
More informationMATH1151 Calculus Test S1 v2a
MATH5 Calculus Test 8 S va January 8, 5 Tese solutions were written and typed up by Brendan Trin Please be etical wit tis resource It is for te use of MatSOC members, so do not repost it on oter forums
More informationSFU UBC UNBC Uvic Calculus Challenge Examination June 5, 2008, 12:00 15:00
SFU UBC UNBC Uvic Calculus Callenge Eamination June 5, 008, :00 5:00 Host: SIMON FRASER UNIVERSITY First Name: Last Name: Scool: Student signature INSTRUCTIONS Sow all your work Full marks are given only
More informationA = h w (1) Error Analysis Physics 141
Introduction In all brances of pysical science and engineering one deals constantly wit numbers wic results more or less directly from experimental observations. Experimental observations always ave inaccuracies.
More informationDifferential Calculus (The basics) Prepared by Mr. C. Hull
Differential Calculus Te basics) A : Limits In tis work on limits, we will deal only wit functions i.e. tose relationsips in wic an input variable ) defines a unique output variable y). Wen we work wit
More informationSymmetry Labeling of Molecular Energies
Capter 7. Symmetry Labeling of Molecular Energies Notes: Most of te material presented in tis capter is taken from Bunker and Jensen 1998, Cap. 6, and Bunker and Jensen 2005, Cap. 7. 7.1 Hamiltonian Symmetry
More informationWEIGHTED KERNEL ESTIMATORS IN NONPARAMETRIC BINOMIAL REGRESSION
WEIGHTED KEREL ESTIMATORS I OPARAMETRIC BIOMIAL REGRESSIO Hidenori OKUMURA and Kanta AITO Department of Business management and information science, Cugoku Junior College, Okayama, 70-097 Japan Department
More informationA Locally Adaptive Transformation Method of Boundary Correction in Kernel Density Estimation
A Locally Adaptive Transformation Metod of Boundary Correction in Kernel Density Estimation R.J. Karunamuni a and T. Alberts b a Department of Matematical and Statistical Sciences University of Alberta,
More information5.1 We will begin this section with the definition of a rational expression. We
Basic Properties and Reducing to Lowest Terms 5.1 We will begin tis section wit te definition of a rational epression. We will ten state te two basic properties associated wit rational epressions and go
More informationEstimation of Population Mean on Recent Occasion under Non-Response in h-occasion Successive Sampling
Journal of Modern Applied Statistical Metods Volume 5 Issue Article --06 Estimation of Population Mean on Recent Occasion under Non-Response in -Occasion Successive Sampling Anup Kumar Sarma Indian Scool
More informationCopyright c 2008 Kevin Long
Lecture 4 Numerical solution of initial value problems Te metods you ve learned so far ave obtained closed-form solutions to initial value problems. A closedform solution is an explicit algebriac formula
More informationConsider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.
Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions
More informationUniversity Mathematics 2
University Matematics 2 1 Differentiability In tis section, we discuss te differentiability of functions. Definition 1.1 Differentiable function). Let f) be a function. We say tat f is differentiable at
More informationFunction Composition and Chain Rules
Function Composition and s James K. Peterson Department of Biological Sciences and Department of Matematical Sciences Clemson University Marc 8, 2017 Outline 1 Function Composition and Continuity 2 Function
More informationBasic Nonparametric Estimation Spring 2002
Basic Nonparametric Estimation Spring 2002 Te following topics are covered today: Basic Nonparametric Regression. Tere are four books tat you can find reference: Silverman986, Wand and Jones995, Hardle990,
More informationch (for some fixed positive number c) reaching c
GSTF Journal of Matematics Statistics and Operations Researc (JMSOR) Vol. No. September 05 DOI 0.60/s4086-05-000-z Nonlinear Piecewise-defined Difference Equations wit Reciprocal and Cubic Terms Ramadan
More informationFast optimal bandwidth selection for kernel density estimation
Fast optimal bandwidt selection for kernel density estimation Vikas Candrakant Raykar and Ramani Duraiswami Dept of computer science and UMIACS, University of Maryland, CollegePark {vikas,ramani}@csumdedu
More informationRecall from our discussion of continuity in lecture a function is continuous at a point x = a if and only if
Computational Aspects of its. Keeping te simple simple. Recall by elementary functions we mean :Polynomials (including linear and quadratic equations) Eponentials Logaritms Trig Functions Rational Functions
More informationOSCILLATION OF SOLUTIONS TO NON-LINEAR DIFFERENCE EQUATIONS WITH SEVERAL ADVANCED ARGUMENTS. Sandra Pinelas and Julio G. Dix
Opuscula Mat. 37, no. 6 (2017), 887 898 ttp://dx.doi.org/10.7494/opmat.2017.37.6.887 Opuscula Matematica OSCILLATION OF SOLUTIONS TO NON-LINEAR DIFFERENCE EQUATIONS WITH SEVERAL ADVANCED ARGUMENTS Sandra
More informationDifferentiation in higher dimensions
Capter 2 Differentiation in iger dimensions 2.1 Te Total Derivative Recall tat if f : R R is a 1-variable function, and a R, we say tat f is differentiable at x = a if and only if te ratio f(a+) f(a) tends
More information11.6 DIRECTIONAL DERIVATIVES AND THE GRADIENT VECTOR
SECTION 11.6 DIRECTIONAL DERIVATIVES AND THE GRADIENT VECTOR 633 wit speed v o along te same line from te opposite direction toward te source, ten te frequenc of te sound eard b te observer is were c is
More informationNonparametric estimation of the average growth curve with general nonstationary error process
Nonparametric estimation of te average growt curve wit general nonstationary error process Karim Benenni, Mustapa Racdi To cite tis version: Karim Benenni, Mustapa Racdi. Nonparametric estimation of te
More information3.1 Extreme Values of a Function
.1 Etreme Values of a Function Section.1 Notes Page 1 One application of te derivative is finding minimum and maimum values off a grap. In precalculus we were only able to do tis wit quadratics by find
More informationTeaching Differentiation: A Rare Case for the Problem of the Slope of the Tangent Line
Teacing Differentiation: A Rare Case for te Problem of te Slope of te Tangent Line arxiv:1805.00343v1 [mat.ho] 29 Apr 2018 Roman Kvasov Department of Matematics University of Puerto Rico at Aguadilla Aguadilla,
More informationApplications of the van Trees inequality to non-parametric estimation.
Brno-06, Lecture 2, 16.05.06 D/Stat/Brno-06/2.tex www.mast.queensu.ca/ blevit/ Applications of te van Trees inequality to non-parametric estimation. Regular non-parametric problems. As an example of suc
More informationUniform Consistency for Nonparametric Estimators in Null Recurrent Time Series
Te University of Adelaide Scool of Economics Researc Paper No. 2009-26 Uniform Consistency for Nonparametric Estimators in Null Recurrent Time Series Jiti Gao, Degui Li and Dag Tjøsteim Te University of
More informationNew Fourth Order Quartic Spline Method for Solving Second Order Boundary Value Problems
MATEMATIKA, 2015, Volume 31, Number 2, 149 157 c UTM Centre for Industrial Applied Matematics New Fourt Order Quartic Spline Metod for Solving Second Order Boundary Value Problems 1 Osama Ala yed, 2 Te
More informationConvergence and Descent Properties for a Class of Multilevel Optimization Algorithms
Convergence and Descent Properties for a Class of Multilevel Optimization Algoritms Stepen G. Nas April 28, 2010 Abstract I present a multilevel optimization approac (termed MG/Opt) for te solution of
More informationFinancial Econometrics Prof. Massimo Guidolin
CLEFIN A.A. 2010/2011 Financial Econometrics Prof. Massimo Guidolin A Quick Review of Basic Estimation Metods 1. Were te OLS World Ends... Consider two time series 1: = { 1 2 } and 1: = { 1 2 }. At tis
More informationMore on generalized inverses of partitioned matrices with Banachiewicz-Schur forms
More on generalized inverses of partitioned matrices wit anaciewicz-scur forms Yongge Tian a,, Yosio Takane b a Cina Economics and Management cademy, Central University of Finance and Economics, eijing,
More informationAn approximation method using approximate approximations
Applicable Analysis: An International Journal Vol. 00, No. 00, September 2005, 1 13 An approximation metod using approximate approximations FRANK MÜLLER and WERNER VARNHORN, University of Kassel, Germany,
More informationRegularized Regression
Regularized Regression David M. Blei Columbia University December 5, 205 Modern regression problems are ig dimensional, wic means tat te number of covariates p is large. In practice statisticians regularize
More informationDerivation Of The Schwarzschild Radius Without General Relativity
Derivation Of Te Scwarzscild Radius Witout General Relativity In tis paper I present an alternative metod of deriving te Scwarzscild radius of a black ole. Te metod uses tree of te Planck units formulas:
More informationSECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY
(Section 3.2: Derivative Functions and Differentiability) 3.2.1 SECTION 3.2: DERIVATIVE FUNCTIONS and DIFFERENTIABILITY LEARNING OBJECTIVES Know, understand, and apply te Limit Definition of te Derivative
More informationLocal Orthogonal Polynomial Expansion (LOrPE) for Density Estimation
Local Ortogonal Polynomial Expansion (LOrPE) for Density Estimation Alex Trindade Dept. of Matematics & Statistics, Texas Tec University Igor Volobouev, Texas Tec University (Pysics Dept.) D.P. Amali Dassanayake,
More informationA New Diagnostic Test for Cross Section Independence in Nonparametric Panel Data Model
e University of Adelaide Scool of Economics Researc Paper No. 2009-6 October 2009 A New Diagnostic est for Cross Section Independence in Nonparametric Panel Data Model Jia Cen, Jiti Gao and Degui Li e
More informationEfficient algorithms for for clone items detection
Efficient algoritms for for clone items detection Raoul Medina, Caroline Noyer, and Olivier Raynaud Raoul Medina, Caroline Noyer and Olivier Raynaud LIMOS - Université Blaise Pascal, Campus universitaire
More informationIEOR 165 Lecture 10 Distribution Estimation
IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat
More informationEstimating Peak Bone Mineral Density in Osteoporosis Diagnosis by Maximum Distribution
International Journal of Clinical Medicine Researc 2016; 3(5): 76-80 ttp://www.aascit.org/journal/ijcmr ISSN: 2375-3838 Estimating Peak Bone Mineral Density in Osteoporosis Diagnosis by Maximum Distribution
More informationSome Review Problems for First Midterm Mathematics 1300, Calculus 1
Some Review Problems for First Midterm Matematics 00, Calculus. Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd,
More informationDifferentiation. Area of study Unit 2 Calculus
Differentiation 8VCE VCEco Area of stud Unit Calculus coverage In tis ca 8A 8B 8C 8D 8E 8F capter Introduction to limits Limits of discontinuous, rational and brid functions Differentiation using first
More informationDedicated to the 70th birthday of Professor Lin Qun
Journal of Computational Matematics, Vol.4, No.3, 6, 4 44. ACCELERATION METHODS OF NONLINEAR ITERATION FOR NONLINEAR PARABOLIC EQUATIONS Guang-wei Yuan Xu-deng Hang Laboratory of Computational Pysics,
More informationUNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING
Statistica Sinica 15(2005), 73-98 UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING Peter Hall 1 and Kee-Hoon Kang 1,2 1 Australian National University and 2 Hankuk University of Foreign Studies Abstract:
More informationOn the computation of wavenumber integrals in phase-shift migration of common-offset sections
Computation of offset-wavenumber integrals On te computation of wavenumber integrals in pase-sift migration of common-offset sections Xiniang Li and Gary F. Margrave ABSTRACT Te evaluation of wavenumber
More informationlecture 26: Richardson extrapolation
43 lecture 26: Ricardson extrapolation 35 Ricardson extrapolation, Romberg integration Trougout numerical analysis, one encounters procedures tat apply some simple approximation (eg, linear interpolation)
More informationFlapwise bending vibration analysis of double tapered rotating Euler Bernoulli beam by using the differential transform method
Meccanica 2006) 41:661 670 DOI 10.1007/s11012-006-9012-z Flapwise bending vibration analysis of double tapered rotating Euler Bernoulli beam by using te differential transform metod Ozge Ozdemir Ozgumus
More informationChapter 5 FINITE DIFFERENCE METHOD (FDM)
MEE7 Computer Modeling Tecniques in Engineering Capter 5 FINITE DIFFERENCE METHOD (FDM) 5. Introduction to FDM Te finite difference tecniques are based upon approximations wic permit replacing differential
More informationAnalysis of Solar Generation and Weather Data in Smart Grid with Simultaneous Inference of Nonlinear Time Series
Te First International Worksop on Smart Cities and Urban Informatics 215 Analysis of Solar Generation and Weater Data in Smart Grid wit Simultaneous Inference of Nonlinear Time Series Yu Wang, Guanqun
More informationNonparametric regression on functional data: inference and practical aspects
arxiv:mat/0603084v1 [mat.st] 3 Mar 2006 Nonparametric regression on functional data: inference and practical aspects Frédéric Ferraty, André Mas, and Pilippe Vieu August 17, 2016 Abstract We consider te
More informationClick here to see an animation of the derivative
Differentiation Massoud Malek Derivative Te concept of derivative is at te core of Calculus; It is a very powerful tool for understanding te beavior of matematical functions. It allows us to optimize functions,
More information