Fast optimal bandwidth selection for kernel density estimation

Vikas Chandrakant Raykar and Ramani Duraiswami
Dept. of Computer Science and UMIACS, University of Maryland, College Park

Abstract
We propose a computationally efficient ε-exact approximation algorithm for univariate Gaussian kernel based density derivative estimation that reduces the computational complexity from O(MN) to linear O(N + M). We apply the procedure to estimate the optimal bandwidth for kernel density estimation. We demonstrate the speedup achieved on this problem using the solve-the-equation plug-in method, and on exploratory projection pursuit techniques.

1 Introduction
Kernel density estimation techniques [10] are widely used in various inference procedures in machine learning, data mining, pattern recognition, and computer vision. Efficient use of these methods requires the optimal selection of the bandwidth of the kernel. A series of techniques have been proposed for data-driven bandwidth selection [4]. The most successful state-of-the-art methods rely on the estimation of general integrated squared density derivative functionals. This is the most computationally intensive task, with O(N²) cost, in addition to the O(N²) cost of computing the kernel density estimate. The core task is to efficiently compute an estimate of the density derivative. The currently most practically successful approach, the solve-the-equation plug-in method [9], involves the numerical solution of a nonlinear equation. Iterative methods to solve this equation involve repeated use of the density functional estimator for different bandwidths, which adds much to the computational burden. Estimation of density derivatives is also needed in various other applications, like the estimation of modes and inflexion points of densities [2] and the estimation of the derivatives of the projection index in projection pursuit algorithms [5].

2 Optimal bandwidth selection
A univariate random variable X on R has a density p if, for all Borel sets A of R, ∫_A p(x) dx = Pr[X ∈ A]. The task of density estimation is to estimate p from
an i.i.d. sample x_1, ..., x_N drawn from p. The estimate p̂ : R × R^N → R is called the density estimate. The most popular non-parametric method for density estimation is the kernel density estimator (KDE) [10]:

(2.1)  p̂(x) = (1/Nh) Σ_{i=1}^N K((x − x_i)/h),

where K(u) is the kernel function and h is the bandwidth. The kernel K(u) is required to satisfy the following two conditions:

(2.2)  K(u) ≥ 0 and ∫_R K(u) du = 1.

The most widely used kernel is the Gaussian of zero mean and unit variance. In this case the KDE can be written as

(2.3)  p̂(x) = (1/(N h √(2π))) Σ_{i=1}^N e^{−(x − x_i)²/2h²}.

The computational cost of evaluating Eq. (2.3) at N points is O(N²), making it prohibitively expensive. The Fast Gauss Transform (FGT) [3] is an approximation algorithm that reduces the computational complexity to O(N), at the expense of reduced precision. Yang et al. [11] presented an extension, the improved fast Gauss transform (IFGT), that scales well with dimensions. The main contribution of the current paper is the extension of the IFGT to accelerate the kernel density derivative estimate, and to solve the optimal bandwidth problem.

The integrated square error (ISE) between the estimate p̂(x) and the actual density p(x) is given by ISE(p̂, p) = ∫_R [p̂(x) − p(x)]² dx. The ISE depends on a particular realization of N points. It can be averaged over these realizations to get the mean integrated squared error (MISE). An asymptotic large sample approximation for the MISE, the AMISE, is usually derived via the Taylor series; the "A" here is for asymptotic. Based on certain assumptions, the AMISE between the actual density and the estimate can be shown to be

(2.4)  AMISE(p̂, p) = (1/Nh) R(K) + (1/4) h⁴ µ₂(K)² R(p″),

where R(g) = ∫_R g(x)² dx, µ₂(g) = ∫_R x² g(x) dx, and p″ is the second derivative of the density p. The first term in Eq. (2.4) is the integrated variance and the second term is the integrated squared bias. The bias is proportional to h⁴ whereas the variance is proportional to 1/Nh, which leads to the well known bias-variance tradeoff. Based on the AMISE expression, the optimal bandwidth h_AMISE can be obtained by differentiating Eq. (2.4) w.r.t. the bandwidth h and setting the result to zero [10]:

(2.5)  h_AMISE = [ R(K) / (µ₂(K)² R(p″) N) ]^{1/5}.
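As an illustration of the two quantities above, the following minimal sketch (our own code, not the paper's implementation; function names are ours) evaluates the Gaussian KDE directly in O(MN), and computes h_AMISE specialized to a normal reference density, for which R(p″) = 3/(8√π σ⁵), giving the classical normal-scale rule h = (4/3)^{1/5} σ N^{−1/5}:

```python
import numpy as np

def gaussian_kde(x_eval, data, h):
    # Direct O(MN) evaluation of the Gaussian KDE:
    # p_hat(x) = 1/(N h sqrt(2 pi)) * sum_i exp(-(x - x_i)^2 / (2 h^2))
    diffs = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def h_amise_normal_reference(data):
    # h_AMISE = [R(K) / (mu_2(K)^2 R(p'') N)]^(1/5) with R(p'') evaluated for a
    # normal density: R(K) = 1/(2 sqrt(pi)), mu_2(K) = 1,
    # R(p'') = 3 / (8 sqrt(pi) sigma^5), which simplifies to
    # h = (4/3)^(1/5) * sigma * N^(-1/5).
    sigma = np.std(data, ddof=1)
    return (4.0 / 3.0) ** 0.2 * sigma * len(data) ** -0.2
```

The normal-scale rule is only a crude reference; the plug-in method described later replaces the normal assumption with estimated density functionals.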

However, this expression cannot be used directly, since R(p″) depends on the second derivative of the density p. In order to estimate R(p″) we need an estimate of the density derivative. A simple estimator for the density derivative can be obtained by taking the derivative of the KDE p̂(x) defined earlier [1]. The r-th density derivative estimate p̂^(r)(x) can be written as

(2.6)  p̂^(r)(x) = (1/(N h^{r+1})) Σ_{i=1}^N K^(r)((x − x_i)/h),

where K^(r) is the r-th derivative of the kernel K. The r-th derivative of the Gaussian kernel K(u) is given by K^(r)(u) = (−1)^r H_r(u) K(u), where H_r(u) is the r-th Hermite polynomial. Hence the density derivative estimate can be written as

(2.7)  p̂^(r)(x) = ((−1)^r/(√(2π) N h^{r+1})) Σ_{i=1}^N H_r((x − x_i)/h) e^{−(x − x_i)²/2h²}.

The computational complexity of evaluating the r-th derivative of the density estimate due to N points at M target locations is O(rNM). Based on a similar analysis, the optimal bandwidth h^(r)_AMISE to estimate the r-th density derivative can be shown to be [10]

(2.8)  h^(r)_AMISE = [ R(K^(r))(2r + 1) / (µ₂(K)² R(p^(r+2)) N) ]^{1/(2r+5)}.

3 Estimation of density functionals
Rather than requiring the actual density derivative, methods for automatic bandwidth selection require the estimation of what are known as density functionals. The general integrated squared density derivative functional is defined as R(p^(s)) = ∫_R [p^(s)(x)]² dx. Using integration by parts, this can be written in the following form: R(p^(s)) = (−1)^s ∫_R p^(2s)(x) p(x) dx. More specifically, for even s we are interested in estimating density functionals of the form

(3.9)  Φ_r = ∫_R p^(r)(x) p(x) dx = E[p^(r)(X)].

An estimator for Φ_r is

(3.10)  Φ̂_r = (1/N) Σ_{i=1}^N p̂^(r)(x_i),

where p̂^(r)(x_i) is the estimate of the r-th derivative of the density p(x) at x = x_i. Using a kernel density derivative estimate for p̂^(r)(x_i) (Eq. (2.6)) we have

(3.11)  Φ̂_r = (1/(N² h^{r+1})) Σ_{i=1}^N Σ_{j=1}^N K^(r)((x_i − x_j)/h).

It should be noted that the computation of Φ̂_r is O(rN²), and hence can be very expensive if a direct algorithm is used. The optimal bandwidth for estimating the density functional is given by [10]

(3.12)  g^(r)_AMSE = [ −2 K^(r)(0) / (µ₂(K) Φ_{r+2} N) ]^{1/(r+3)}.
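The direct double-sum estimator of Φ_r can be sketched as follows (illustrative code of ours, not the paper's implementation). For the Gaussian kernel the derivatives involve the probabilists' Hermite polynomials, which NumPy provides via its `hermite_e` polynomial basis:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def gauss_kernel_derivative(u, r):
    # K^(r)(u) = (-1)^r H_r(u) K(u), with H_r the r-th (probabilists')
    # Hermite polynomial and K the standard normal density.
    coeffs = np.zeros(r + 1)
    coeffs[r] = 1.0
    return (-1) ** r * hermeval(u, coeffs) * np.exp(-0.5 * u ** 2) / math.sqrt(2 * math.pi)

def phi_r_direct(data, r, h):
    # Direct O(r N^2) evaluation of
    # Phi_hat_r = 1/(N^2 h^(r+1)) sum_i sum_j K^(r)((x_i - x_j)/h).
    # This double sum is the computational bottleneck that the fast
    # epsilon-exact algorithm of Section 5 removes.
    n = len(data)
    u = (data[:, None] - data[None, :]) / h
    return gauss_kernel_derivative(u, r).sum() / (n ** 2 * h ** (r + 1))
```

The O(N²) memory of the pairwise matrix is acceptable for a sketch; a production version would stream over blocks of pairs.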
4 Solve-the-equation plug-in method
The most successful among all current bandwidth selection methods [4], both empirically and theoretically, is the solve-the-equation plug-in method. We use the following version, as described in [9]. The AMISE optimal bandwidth is the solution to the equation

(4.13)  h = [ R(K) / (µ₂(K)² Φ̂₄(γ(h)) N) ]^{1/5},

where Φ̂₄(γ(h)) is an estimate of Φ₄ = R(p″) using the pilot bandwidth γ(h), which depends on h. The pilot bandwidth is chosen such that it minimizes the asymptotic MSE for the estimation of Φ₄, and is

(4.14)  g_MSE = [ −2 K^(4)(0) / (µ₂(K) Φ₆ N) ]^{1/7}.

Substituting for N, g_MSE can be written as a function of h as follows:

(4.15)  g_MSE = [ −2 K^(4)(0) µ₂(K) Φ₄ / (R(K) Φ₆) ]^{1/7} h^{5/7}.

This suggests that we set

(4.16)  γ(h) = [ −2 K^(4)(0) µ₂(K) Φ̂₄(g₁) / (R(K) Φ̂₆(g₂)) ]^{1/7} h^{5/7},

where Φ̂₄(g₁) and Φ̂₆(g₂) are estimates of Φ₄ and Φ₆ using bandwidths g₁ and g₂ respectively. The bandwidths g₁ and g₂ are chosen such that they minimize the asymptotic MSE:

(4.17)  g₁ = [ −6 / (√(2π) Φ̂₆ N) ]^{1/7},   g₂ = [ 30 / (√(2π) Φ̂₈ N) ]^{1/9},

where Φ̂₆ and Φ̂₈ are estimators for Φ₆ and Φ₈ respectively. We could use a similar strategy for the estimation of Φ₆ and Φ₈, but this process would never terminate, since the optimal bandwidth for estimating Φ_r depends on Φ_{r+2}. The usual strategy is to estimate a Φ_r at some stage using a quick and simple bandwidth chosen with reference to a parametric family, usually a normal density. It has been observed that as the number of stages increases, the variance of the bandwidth increases. The most common choice is to use only two stages. If p is a normal density with variance σ², then for

even r we can compute Φ_r exactly [10]. An estimator of Φ_r will use an estimate σ̂² of the variance. Based on this we can estimate Φ₆ and Φ₈ as

(4.18)  Φ̂₆ = −15/(16√π σ̂⁷),   Φ̂₈ = 105/(32√π σ̂⁹).

The two-stage solve-the-equation method using the Gaussian kernel can be summarized as follows. (1) Compute an estimate σ̂ of the standard deviation. (2) Estimate the density functionals Φ₆ and Φ₈ using the normal scale rule (Eq. (4.18)). (3) Estimate the density functionals Φ₄ and Φ₆ using Eq. (3.11) with the optimal bandwidths g₁ and g₂ (Eq. (4.17)). (4) The bandwidth is the solution to the nonlinear Eq. (4.13), which can be solved using any numerical routine, like the Newton-Raphson method. The main computational bottleneck is the estimation of the density functionals, which is of O(N²).

5 Fast density derivative estimation
To estimate the density derivative at M target points, {y_j ∈ R}_{j=1}^M, we need to evaluate sums of the form

(5.19)  G_r(y_j) = Σ_{i=1}^N q_i H_r((y_j − x_i)/h₂) e^{−(y_j − x_i)²/h₁²},   j = 1, ..., M,

where {q_i ∈ R}_{i=1}^N will be referred to as the source weights, h₁ ∈ R⁺ is the bandwidth of the Gaussian, and h₂ ∈ R⁺ is the bandwidth of the Hermite polynomial. The computational complexity of evaluating Eq. (5.19) is O(rNM). For any given ε > 0, the algorithm computes an approximation Ĝ_r(y_j) such that

(5.20)  |Ĝ_r(y_j) − G_r(y_j)| ≤ Qε,

where Q = Σ_{i=1}^N |q_i|. We call Ĝ_r(y_j) an ε-exact approximation to G_r(y_j). We describe the algorithm briefly; more details can be found in [8]. For any point x* ∈ R the Gaussian can be written as

(5.21)  e^{−(y_j − x_i)²/h₁²} = e^{−(x_i − x*)²/h₁²} e^{−(y_j − x*)²/h₁²} e^{2(x_i − x*)(y_j − x*)/h₁²}.

In Eq. (5.21), in the third exponential e^{2(x_i − x*)(y_j − x*)/h₁²} the source and the target are entangled. This entanglement is separated using the Taylor series expansion as follows:

(5.22)  e^{2(x_i − x*)(y_j − x*)/h₁²} = Σ_{k=0}^{p−1} (2^k/k!) ((x_i − x*)/h₁)^k ((y_j − x*)/h₁)^k + error.

Using this, the Gaussian can now be factorized as

(5.23)  e^{−(y_j − x_i)²/h₁²} = Σ_{k=0}^{p−1} (2^k/k!) [ e^{−(x_i − x*)²/h₁²} ((x_i − x*)/h₁)^k ] [ e^{−(y_j − x*)²/h₁²} ((y_j − x*)/h₁)^k ] + error.

The r-th Hermite polynomial can be factorized as

(5.24)  H_r((y_j − x_i)/h₂) = Σ_{l=0}^{⌊r/2⌋} Σ_{m=0}^{r−2l} a_lm ((x_i − x*)/h₂)^m ((y_j − x*)/h₂)^{r−2l−m},

where

(5.25)  a_lm = (−1)^{l+m} r! / (2^l l! m! (r − 2l − m)!).
Using Eqs. (5.23) and (5.24), G_r(y_j), after ignoring the error terms, can be approximated as

Ĝ_r(y_j) = Σ_{k=0}^{p−1} Σ_{l=0}^{⌊r/2⌋} Σ_{m=0}^{r−2l} a_lm B_km e^{−(y_j − x*)²/h₁²} ((y_j − x*)/h₁)^k ((y_j − x*)/h₂)^{r−2l−m},

where

B_km = (2^k/k!) Σ_{i=1}^N q_i e^{−(x_i − x*)²/h₁²} ((x_i − x*)/h₁)^k ((x_i − x*)/h₂)^m.

Thus far, we have used the Taylor series expansion about a certain point x*. However, if we use the same x* for all the points, we would typically require a very high truncation number p, since the Taylor series gives a good approximation only in a small open interval around x*. We therefore uniformly sub-divide the space into K intervals of length 2r_x. The N source points are assigned to K clusters, S_n for n = 1, ..., K, with c_n being the center of each cluster. The aggregated coefficients are now computed for each cluster, and the total contribution from all the clusters is summed up. Since the Gaussian decays very rapidly, a further speedup is achieved if we ignore all the sources belonging to a cluster when the cluster is farther than a certain distance from the target point, i.e., |y_j − c_n| > r_y. Substituting h₁ = √2 h and h₂ = h, the final algorithm can be written as

(5.26)  Ĝ_r(y_j) = Σ_{|y_j − c_n| ≤ r_y} Σ_{k=0}^{p−1} Σ_{l=0}^{⌊r/2⌋} Σ_{m=0}^{r−2l} a_lm B^n_km e^{−(y_j − c_n)²/h₁²} ((y_j − c_n)/h₁)^k ((y_j − c_n)/h₂)^{r−2l−m},

where

B^n_km = (2^k/k!) Σ_{x_i ∈ S_n} q_i e^{−(x_i − c_n)²/h₁²} ((x_i − c_n)/h₁)^k ((x_i − c_n)/h₂)^m.

5.1 Computational and space complexity
Computing the coefficients B^n_km for all the clusters is O(prN). Evaluation of Ĝ_r(y_j) at M points is O(nprM), where n is the maximum number of neighbor clusters which influence y_j. Hence the total computational complexity is O(prN + nprM). For each cluster we need to store all the pr coefficients; hence the storage needed is O(prK + N + M).
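The clustered factorization can be sketched for the simplest case r = 0 (a plain weighted Gaussian sum, i.e. the Hermite factor dropped), with a fixed truncation number p and without the cutoff radius r_y. This is an illustrative simplification of ours, not the full algorithm of [8]:

```python
import math
import numpy as np

def direct_gauss_sum(x, y, q, h1):
    # O(NM) reference: G(y_j) = sum_i q_i exp(-(y_j - x_i)^2 / h1^2)
    return np.array([np.sum(q * np.exp(-(yj - x) ** 2 / h1 ** 2)) for yj in y])

def fast_gauss_sum(x, y, q, h1, p=25):
    # Cluster the sources into K intervals of width about h1, Taylor-expand
    # the entangled exponential about each cluster center c_n, and accumulate
    # per-cluster moments B[n, k] once.  Evaluation then costs O(pN + pKM)
    # instead of O(NM).  (No cutoff radius is applied in this sketch.)
    lo, hi = x.min(), x.max()
    K = max(1, int(np.ceil((hi - lo) / h1)))
    edges = np.linspace(lo, hi, K + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, K - 1)
    # B[n, k] = (2^k/k!) * sum_{i in S_n} q_i e^{-(x_i-c_n)^2/h1^2} ((x_i-c_n)/h1)^k
    B = np.zeros((K, p))
    for n in range(K):
        mask = idx == n
        if not np.any(mask):
            continue
        t = (x[mask] - centers[n]) / h1
        w = q[mask] * np.exp(-t ** 2)
        for k in range(p):
            B[n, k] = 2.0 ** k / math.factorial(k) * np.sum(w * t ** k)
    # G_hat(y_j) = sum_n sum_k B[n, k] e^{-(y_j-c_n)^2/h1^2} ((y_j-c_n)/h1)^k
    G = np.zeros(len(y))
    for j, yj in enumerate(y):
        u = (yj - centers) / h1
        G[j] = np.sum(B * (np.exp(-u ** 2)[:, None] * u[:, None] ** np.arange(p)))
    return G
```

With cluster radius about h₁/2, the expansion variable stays small and the truncation error decays rapidly in p, which is what makes the approximation ε-exact for a modest truncation number.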

Figure 1: Running time in seconds and maximum absolute error relative to Q for the direct and the fast methods as a function of N. N = M source and target points were uniformly distributed, with r = 4 and ε = 10⁻⁶.

5.2 Choosing the parameters
Given any ε > 0, we want to choose the following parameters: K (the number of intervals), r_y (the cutoff radius for each cluster), and p (the truncation number), such that for any target point y_j we have |Ĝ_r(y_j) − G_r(y_j)| ≤ Qε. We give the final results for the choice of the parameters; the detailed derivations can be seen in the technical report [8]. The number of clusters K is chosen such that r_x = h₁/2. The cutoff radius r_y is of the form r_x plus a term that grows as ln(√(r!)/ε). The truncation number p is chosen such that the truncation error bound, evaluated at b = min(b*, r_y) and a = r_x, where b* = (a + √(a² + 8p h₁²))/2, falls below ε.

5.3 Numerical experiments
The algorithm was programmed in C++ and was run on a 1.6 GHz Pentium M processor with 512 MB of RAM. Figure 1 shows the running time and the maximum absolute error relative to Q for both the direct and the fast methods as a function of N = M. We see that the running time of the fast method grows linearly as the number of sources and targets increases, while that of the direct evaluation grows quadratically.

6 Speedup achieved for bandwidth estimation
We demonstrate the speedup achieved on the mixture of normal densities used by Marron and Wand [6]. The family of normal mixture densities is extremely rich and, in fact, any density can be approximated arbitrarily well by a member of this family. Figure 2 shows a sample of four different densities, out of the fifteen densities which were used by the authors in [6] as typical representatives of the densities likely to be encountered in real data situations. We sampled N = 50,000 points from each density. The AMISE optimal bandwidth was estimated using both the direct method and the proposed fast method. Table 1 shows the speedup achieved and the

Figure 2: Four normal mixture densities from Marron and Wand [6]. The
solid line shows the actual density and the dotted line is the estimated density using the optimal bandwidth.

absolute relative error. We also used the Adult database from the UCI machine learning repository [7]. The database, extracted from the census bureau database, contains 32,561 training instances with 14 attributes per instance. Table 2 shows the speedup achieved and the absolute relative error for two of the continuous attributes.

7 Projection pursuit
Projection pursuit (PP) is an exploratory technique for visualizing and analyzing large multivariate datasets [5]. The idea of PP is to search for projections from high- to low-dimensional space that are most interesting. The PP algorithm for finding the most interesting one-dimensional subspace is as follows. First, project each data point onto the direction vector a ∈ R^d, i.e., z_i = a^T x_i. Compute the univariate nonparametric kernel density estimate, p̂, of the projected points z_i. Compute the projection index I(a) based on the density estimate. Locally optimize over the choice of a to get the most interesting projection of the data. Repeat from a new initial projection to get a different view. The projection index is designed to reveal specific structure in the data, like clusters, outliers, or smooth manifolds. The entropy index based on Rényi's order-1 entropy is given by I(a) = ∫ p(z) log p(z) dz. The density of zero mean and unit variance which uniquely minimizes this is the standard normal density. Thus the projection index finds the direction which is most non-normal. In practice we need to use an estimate p̂ of the true density p, for example the KDE using the Gaussian kernel. Thus we have an estimate of the entropy index as follows:

(7.27)  Î(a) = (1/N) Σ_{i=1}^N log p̂(a^T x_i).
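The entropy index can be sketched as follows (illustrative code of ours; the KDE bandwidth h is taken as given rather than estimated by the plug-in method, and the direct O(N²) KDE is used instead of the fast algorithm):

```python
import numpy as np

def entropy_index(X, a, h):
    # Entropy projection index: project the data onto a, form the Gaussian
    # KDE of the projected points, and average log p_hat over the sample:
    # I_hat(a) = (1/N) sum_i log p_hat(a^T x_i).
    a = a / np.linalg.norm(a)          # enforce the constraint ||a|| = 1
    z = X @ a
    u = (z[:, None] - z[None, :]) / h
    p_hat = np.exp(-0.5 * u ** 2).sum(axis=1) / (len(z) * h * np.sqrt(2 * np.pi))
    return np.mean(np.log(p_hat))
```

For a direction along which the projected data are standard normal, the index should be close to −(1/2) log(2πe) ≈ −1.42, the negative entropy of N(0, 1); more structured (e.g. bimodal) projections give larger values.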

Table 1: The running time in seconds for the direct and the fast methods for the four normal mixture densities of Marron and Wand [6] (see Fig. 2). The absolute relative error is defined as |h_direct − h_fast| / h_direct. For the fast method we used ε = 10⁻³.

Table 2: Optimal bandwidth estimation for two continuous attributes (Age and fnlwgt) of the Adult database [7].

The entropy index Î has to be optimized over the d-dimensional vector a, subject to the constraint that ‖a‖ = 1. The optimization will require the gradient of the objective function. For the index defined above, the gradient can be written as

d/da [Î(a)] = (1/N) Σ_{i=1}^N [ p̂′(a^T x_i) / p̂(a^T x_i) ] x_i.

For PP, the computational burden is greatly reduced if we use the proposed fast method. The computational burden is reduced in the following three instances: (1) computation of the kernel density estimate, (2) estimation of the optimal bandwidth, and (3) computation of the first derivative of the kernel density estimate, which is required in the optimization procedure. Fig. 3 shows an example of the PP algorithm used to segment an image based on color.

8 Conclusions
We proposed a fast ε-exact algorithm for kernel density derivative estimation which reduces the computational complexity from O(N²) to O(N). We demonstrated the speedup achieved for optimal bandwidth estimation both on simulated as well as real data. An extended version of this paper is available as a technical report [8].

References
[1] P. K. Bhattacharya. Estimation of a probability density function and its derivatives. Sankhya, Series A, 29:373–382, 1967.
[2] K. Fukunaga and L. Hostetler. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Trans. Info. Theory, 21(1):32–40, 1975.
[3] L. Greengard and J. Strain. The fast Gauss transform. SIAM J. Sci. Stat. Comput., 12(1):79–94, 1991.
[4] M. C. Jones, J. S. Marron, and S. J. Sheather. A brief survey of bandwidth selection for
density estimation. J. Amer. Stat. Assoc., 91(433):401–407, March 1996.
[5] M. C. Jones and R. Sibson. What is projection pursuit? J. R. Statist. Soc. A, 150:1–36, 1987.

Figure 3: (a) The original image. (b) The centered and scaled RGB space; each pixel in the image is a point in the RGB space. (c) KDE of the projection of the pixels on the most interesting direction found by projection pursuit. (d) The assignment of the pixels to the three modes in the KDE.

[6] J. S. Marron and M. P. Wand. Exact mean integrated squared error. The Annals of Statistics, 20(2):712–736, 1992.
[7] D. J. Newman, S. Hettich, C. L. Blake, and C. J. Merz. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998.
[8] V. C. Raykar and R. Duraiswami. Very fast optimal bandwidth selection for univariate kernel density estimation. Technical Report CS-TR-4774, University of Maryland, College Park, 2005.
[9] S. J. Sheather and M. C. Jones. A reliable data-based bandwidth selection method for kernel density estimation. J. Royal Statist. Soc. B, 53:683–690, 1991.
[10] M. P. Wand and M. C. Jones. Kernel Smoothing. Chapman and Hall, 1995.
[11] C. Yang, R. Duraiswami, N. Gumerov, and L. Davis. Improved fast Gauss transform and efficient kernel density estimation. In IEEE Int. Conf. on Computer Vision, 2003.


More information

MATH745 Fall MATH745 Fall

MATH745 Fall MATH745 Fall MATH745 Fall 5 MATH745 Fall 5 INTRODUCTION WELCOME TO MATH 745 TOPICS IN NUMERICAL ANALYSIS Instructor: Dr Bartosz Protas Department of Matematics & Statistics Email: bprotas@mcmasterca Office HH 36, Ext

More information

IEOR 165 Lecture 10 Distribution Estimation

IEOR 165 Lecture 10 Distribution Estimation IEOR 165 Lecture 10 Distribution Estimation 1 Motivating Problem Consider a situation were we ave iid data x i from some unknown distribution. One problem of interest is estimating te distribution tat

More information

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx. Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions

More information

EFFICIENT REPLICATION VARIANCE ESTIMATION FOR TWO-PHASE SAMPLING

EFFICIENT REPLICATION VARIANCE ESTIMATION FOR TWO-PHASE SAMPLING Statistica Sinica 13(2003), 641-653 EFFICIENT REPLICATION VARIANCE ESTIMATION FOR TWO-PHASE SAMPLING J. K. Kim and R. R. Sitter Hankuk University of Foreign Studies and Simon Fraser University Abstract:

More information

Efficient Kernel Machines Using the Improved Fast Gauss Transform

Efficient Kernel Machines Using the Improved Fast Gauss Transform Efficient Kernel Machines Using the Improved Fast Gauss Transform Changjiang Yang, Ramani Duraiswami and Larry Davis Department of Computer Science University of Maryland College Park, MD 20742 {yangcj,ramani,lsd}@umiacs.umd.edu

More information

Vikas Chandrakant Raykar Doctor of Philosophy, 2007

Vikas Chandrakant Raykar Doctor of Philosophy, 2007 ABSTRACT Title of dissertation: SCALABLE MACHINE LEARNING FOR MASSIVE DATASETS : FAST SUMMATION ALGORITHMS Vikas Chandrakant Raykar Doctor of Philosophy, 2007 Dissertation directed by: Professor Dr. Ramani

More information

Copyright c 2008 Kevin Long

Copyright c 2008 Kevin Long Lecture 4 Numerical solution of initial value problems Te metods you ve learned so far ave obtained closed-form solutions to initial value problems. A closedform solution is an explicit algebriac formula

More information

INFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION. 1. Introduction

INFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION. 1. Introduction INFINITE ORDER CROSS-VALIDATED LOCAL POLYNOMIAL REGRESSION PETER G. HALL AND JEFFREY S. RACINE Abstract. Many practical problems require nonparametric estimates of regression functions, and local polynomial

More information

Robust Average Derivative Estimation. February 2007 (Preliminary and Incomplete Do not quote without permission)

Robust Average Derivative Estimation. February 2007 (Preliminary and Incomplete Do not quote without permission) Robust Average Derivative Estimation Marcia M.A. Scafgans Victoria inde-wals y February 007 (Preliminary and Incomplete Do not quote witout permission) Abstract. Many important models, suc as index models

More information

Bootstrap prediction intervals for Markov processes

Bootstrap prediction intervals for Markov processes arxiv: arxiv:0000.0000 Bootstrap prediction intervals for Markov processes Li Pan and Dimitris N. Politis Li Pan Department of Matematics University of California San Diego La Jolla, CA 92093-0112, USA

More information

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator.

Lecture XVII. Abstract We introduce the concept of directional derivative of a scalar function and discuss its relation with the gradient operator. Lecture XVII Abstract We introduce te concept of directional derivative of a scalar function and discuss its relation wit te gradient operator. Directional derivative and gradient Te directional derivative

More information

Exercises for numerical differentiation. Øyvind Ryan

Exercises for numerical differentiation. Øyvind Ryan Exercises for numerical differentiation Øyvind Ryan February 25, 2013 1. Mark eac of te following statements as true or false. a. Wen we use te approximation f (a) (f (a +) f (a))/ on a computer, we can

More information

Analysis of Solar Generation and Weather Data in Smart Grid with Simultaneous Inference of Nonlinear Time Series

Analysis of Solar Generation and Weather Data in Smart Grid with Simultaneous Inference of Nonlinear Time Series Te First International Worksop on Smart Cities and Urban Informatics 215 Analysis of Solar Generation and Weater Data in Smart Grid wit Simultaneous Inference of Nonlinear Time Series Yu Wang, Guanqun

More information

THE STURM-LIOUVILLE-TRANSFORMATION FOR THE SOLUTION OF VECTOR PARTIAL DIFFERENTIAL EQUATIONS. L. Trautmann, R. Rabenstein

THE STURM-LIOUVILLE-TRANSFORMATION FOR THE SOLUTION OF VECTOR PARTIAL DIFFERENTIAL EQUATIONS. L. Trautmann, R. Rabenstein Worksop on Transforms and Filter Banks (WTFB),Brandenburg, Germany, Marc 999 THE STURM-LIOUVILLE-TRANSFORMATION FOR THE SOLUTION OF VECTOR PARTIAL DIFFERENTIAL EQUATIONS L. Trautmann, R. Rabenstein Lerstul

More information

Digital Filter Structures

Digital Filter Structures Digital Filter Structures Te convolution sum description of an LTI discrete-time system can, in principle, be used to implement te system For an IIR finite-dimensional system tis approac is not practical

More information

Long Term Time Series Prediction with Multi-Input Multi-Output Local Learning

Long Term Time Series Prediction with Multi-Input Multi-Output Local Learning Long Term Time Series Prediction wit Multi-Input Multi-Output Local Learning Gianluca Bontempi Macine Learning Group, Département d Informatique Faculté des Sciences, ULB, Université Libre de Bruxelles

More information

Te comparison of dierent models M i is based on teir relative probabilities, wic can be expressed, again using Bayes' teorem, in terms of prior probab

Te comparison of dierent models M i is based on teir relative probabilities, wic can be expressed, again using Bayes' teorem, in terms of prior probab To appear in: Advances in Neural Information Processing Systems 9, eds. M. C. Mozer, M. I. Jordan and T. Petsce. MIT Press, 997 Bayesian Model Comparison by Monte Carlo Caining David Barber D.Barber@aston.ac.uk

More information

(a) At what number x = a does f have a removable discontinuity? What value f(a) should be assigned to f at x = a in order to make f continuous at a?

(a) At what number x = a does f have a removable discontinuity? What value f(a) should be assigned to f at x = a in order to make f continuous at a? Solutions to Test 1 Fall 016 1pt 1. Te grap of a function f(x) is sown at rigt below. Part I. State te value of eac limit. If a limit is infinite, state weter it is or. If a limit does not exist (but is

More information

The Laplace equation, cylindrically or spherically symmetric case

The Laplace equation, cylindrically or spherically symmetric case Numerisce Metoden II, 7 4, und Übungen, 7 5 Course Notes, Summer Term 7 Some material and exercises Te Laplace equation, cylindrically or sperically symmetric case Electric and gravitational potential,

More information

Calculus I Practice Exam 1A

Calculus I Practice Exam 1A Calculus I Practice Exam A Calculus I Practice Exam A Tis practice exam empasizes conceptual connections and understanding to a greater degree tan te exams tat are usually administered in introductory

More information

UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING

UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING Statistica Sinica 15(2005), 73-98 UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING Peter Hall 1 and Kee-Hoon Kang 1,2 1 Australian National University and 2 Hankuk University of Foreign Studies Abstract:

More information

ERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER*

ERROR BOUNDS FOR THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BRADLEY J. LUCIER* EO BOUNDS FO THE METHODS OF GLIMM, GODUNOV AND LEVEQUE BADLEY J. LUCIE* Abstract. Te expected error in L ) attimet for Glimm s sceme wen applied to a scalar conservation law is bounded by + 2 ) ) /2 T

More information

Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems

Optimal parameters for a hierarchical grid data structure for contact detection in arbitrarily polydisperse particle systems Comp. Part. Mec. 04) :357 37 DOI 0.007/s4057-04-000-9 Optimal parameters for a ierarcical grid data structure for contact detection in arbitrarily polydisperse particle systems Dinant Krijgsman Vitaliy

More information

Fast large scale Gaussian process regression using the improved fast Gauss transform

Fast large scale Gaussian process regression using the improved fast Gauss transform Fast large scale Gaussian process regression using the improved fast Gauss transform VIKAS CHANDRAKANT RAYKAR and RAMANI DURAISWAMI Perceptual Interfaces and Reality Laboratory Department of Computer Science

More information

Improved Fast Gauss Transform

Improved Fast Gauss Transform Improved Fast Gauss Transform Changjiang Yang Department of Computer Science University of Maryland Fast Gauss Transform (FGT) Originally proposed by Greengard and Strain (1991) to efficiently evaluate

More information

Runge-Kutta methods. With orders of Taylor methods yet without derivatives of f (t, y(t))

Runge-Kutta methods. With orders of Taylor methods yet without derivatives of f (t, y(t)) Runge-Kutta metods Wit orders of Taylor metods yet witout derivatives of f (t, y(t)) First order Taylor expansion in two variables Teorem: Suppose tat f (t, y) and all its partial derivatives are continuous

More information

Learning based super-resolution land cover mapping

Learning based super-resolution land cover mapping earning based super-resolution land cover mapping Feng ing, Yiang Zang, Giles M. Foody IEEE Fellow, Xiaodong Xiuua Zang, Siming Fang, Wenbo Yun Du is work was supported in part by te National Basic Researc

More information

232 Calculus and Structures

232 Calculus and Structures 3 Calculus and Structures CHAPTER 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS FOR EVALUATING BEAMS Calculus and Structures 33 Copyrigt Capter 17 JUSTIFICATION OF THE AREA AND SLOPE METHODS 17.1 THE

More information

1 1. Rationalize the denominator and fully simplify the radical expression 3 3. Solution: = 1 = 3 3 = 2

1 1. Rationalize the denominator and fully simplify the radical expression 3 3. Solution: = 1 = 3 3 = 2 MTH - Spring 04 Exam Review (Solutions) Exam : February 5t 6:00-7:0 Tis exam review contains questions similar to tose you sould expect to see on Exam. Te questions included in tis review, owever, are

More information

DELFT UNIVERSITY OF TECHNOLOGY Faculty of Electrical Engineering, Mathematics and Computer Science

DELFT UNIVERSITY OF TECHNOLOGY Faculty of Electrical Engineering, Mathematics and Computer Science DELFT UNIVERSITY OF TECHNOLOGY Faculty of Electrical Engineering, Matematics and Computer Science. ANSWERS OF THE TEST NUMERICAL METHODS FOR DIFFERENTIAL EQUATIONS (WI3097 TU) Tuesday January 9 008, 9:00-:00

More information

Preconditioning in H(div) and Applications

Preconditioning in H(div) and Applications 1 Preconditioning in H(div) and Applications Douglas N. Arnold 1, Ricard S. Falk 2 and Ragnar Winter 3 4 Abstract. Summarizing te work of [AFW97], we sow ow to construct preconditioners using domain decomposition

More information

Volume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households

Volume 29, Issue 3. Existence of competitive equilibrium in economies with multi-member households Volume 29, Issue 3 Existence of competitive equilibrium in economies wit multi-member ouseolds Noriisa Sato Graduate Scool of Economics, Waseda University Abstract Tis paper focuses on te existence of

More information

CHOOSING A KERNEL FOR CROSS-VALIDATION. A Dissertation OLGA SAVCHUK

CHOOSING A KERNEL FOR CROSS-VALIDATION. A Dissertation OLGA SAVCHUK CHOOSING A KERNEL FOR CROSS-VALIDATION A Dissertation by OLGA SAVCHUK Submitted to te Office of Graduate Studies of Texas A&M University in partial fulfillment of te requirements for te degree of DOCTOR

More information

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems

5 Ordinary Differential Equations: Finite Difference Methods for Boundary Problems 5 Ordinary Differential Equations: Finite Difference Metods for Boundary Problems Read sections 10.1, 10.2, 10.4 Review questions 10.1 10.4, 10.8 10.9, 10.13 5.1 Introduction In te previous capters we

More information

STAT Homework X - Solutions

STAT Homework X - Solutions STAT-36700 Homework X - Solutions Fall 201 November 12, 201 Tis contains solutions for Homework 4. Please note tat we ave included several additional comments and approaces to te problems to give you better

More information

HOMEWORK HELP 2 FOR MATH 151

HOMEWORK HELP 2 FOR MATH 151 HOMEWORK HELP 2 FOR MATH 151 Here we go; te second round of omework elp. If tere are oters you would like to see, let me know! 2.4, 43 and 44 At wat points are te functions f(x) and g(x) = xf(x)continuous,

More information

Precalculus Test 2 Practice Questions Page 1. Note: You can expect other types of questions on the test than the ones presented here!

Precalculus Test 2 Practice Questions Page 1. Note: You can expect other types of questions on the test than the ones presented here! Precalculus Test 2 Practice Questions Page Note: You can expect oter types of questions on te test tan te ones presented ere! Questions Example. Find te vertex of te quadratic f(x) = 4x 2 x. Example 2.

More information

Math 1241 Calculus Test 1

Math 1241 Calculus Test 1 February 4, 2004 Name Te first nine problems count 6 points eac and te final seven count as marked. Tere are 120 points available on tis test. Multiple coice section. Circle te correct coice(s). You do

More information

A Locally Adaptive Transformation Method of Boundary Correction in Kernel Density Estimation

A Locally Adaptive Transformation Method of Boundary Correction in Kernel Density Estimation A Locally Adaptive Transformation Metod of Boundary Correction in Kernel Density Estimation R.J. Karunamuni a and T. Alberts b a Department of Matematical and Statistical Sciences University of Alberta,

More information

Lecture 21. Numerical differentiation. f ( x+h) f ( x) h h

Lecture 21. Numerical differentiation. f ( x+h) f ( x) h h Lecture Numerical differentiation Introduction We can analytically calculate te derivative of any elementary function, so tere migt seem to be no motivation for calculating derivatives numerically. However

More information

EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS

EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS Statistica Sinica 24 2014, 395-414 doi:ttp://dx.doi.org/10.5705/ss.2012.064 EFFICIENCY OF MODEL-ASSISTED REGRESSION ESTIMATORS IN SAMPLE SURVEYS Jun Sao 1,2 and Seng Wang 3 1 East Cina Normal University,

More information

POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY

POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY APPLICATIONES MATHEMATICAE 36, (29), pp. 2 Zbigniew Ciesielski (Sopot) Ryszard Zieliński (Warszawa) POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY Abstract. Dvoretzky

More information

The Complexity of Computing the MCD-Estimator

The Complexity of Computing the MCD-Estimator Te Complexity of Computing te MCD-Estimator Torsten Bernolt Lerstul Informatik 2 Universität Dortmund, Germany torstenbernolt@uni-dortmundde Paul Fiscer IMM, Danisc Tecnical University Kongens Lyngby,

More information

The total error in numerical differentiation

The total error in numerical differentiation AMS 147 Computational Metods and Applications Lecture 08 Copyrigt by Hongyun Wang, UCSC Recap: Loss of accuracy due to numerical cancellation A B 3, 3 ~10 16 In calculating te difference between A and

More information

[db]

[db] Blind Source Separation based on Second-Order Statistics wit Asymptotically Optimal Weigting Arie Yeredor Department of EE-Systems, el-aviv University P.O.Box 3900, el-aviv 69978, Israel Abstract Blind

More information

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t).

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t). . Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd, periodic function tat as been sifted upwards, so we will use

More information

4.2 - Richardson Extrapolation

4.2 - Richardson Extrapolation . - Ricardson Extrapolation. Small-O Notation: Recall tat te big-o notation used to define te rate of convergence in Section.: Definition Let x n n converge to a number x. Suppose tat n n is a sequence

More information

NONLINEAR SYSTEMS IDENTIFICATION USING THE VOLTERRA MODEL. Georgeta Budura

NONLINEAR SYSTEMS IDENTIFICATION USING THE VOLTERRA MODEL. Georgeta Budura NONLINEAR SYSTEMS IDENTIFICATION USING THE VOLTERRA MODEL Georgeta Budura Politenica University of Timisoara, Faculty of Electronics and Telecommunications, Comm. Dep., georgeta.budura@etc.utt.ro Abstract:

More information

Efficient algorithms for for clone items detection

Efficient algorithms for for clone items detection Efficient algoritms for for clone items detection Raoul Medina, Caroline Noyer, and Olivier Raynaud Raoul Medina, Caroline Noyer and Olivier Raynaud LIMOS - Université Blaise Pascal, Campus universitaire

More information

Gradient Descent etc.

Gradient Descent etc. 1 Gradient Descent etc EE 13: Networked estimation and control Prof Kan) I DERIVATIVE Consider f : R R x fx) Te derivative is defined as d fx) = lim dx fx + ) fx) Te cain rule states tat if d d f gx) )

More information

An Analysis of Locally Defined Principal Curves and Surfaces

An Analysis of Locally Defined Principal Curves and Surfaces An Analysis of Locally Defined Principal Curves and Surfaces James McQueen Department of Statistics, University of Wasington Seattle, WA, 98195, USA Abstract Principal curves are generally defined as smoot

More information

Numerical Analysis MTH603. dy dt = = (0) , y n+1. We obtain yn. Therefore. and. Copyright Virtual University of Pakistan 1

Numerical Analysis MTH603. dy dt = = (0) , y n+1. We obtain yn. Therefore. and. Copyright Virtual University of Pakistan 1 Numerical Analysis MTH60 PREDICTOR CORRECTOR METHOD Te metods presented so far are called single-step metods, were we ave seen tat te computation of y at t n+ tat is y n+ requires te knowledge of y n only.

More information