Exponential Concentration for Mutual Information Estimation with Application to Forests


Exponential Concentration for Mutual Information Estimation with Application to Forests

Han Liu
Department of Operations Research and Financial Engineering
Princeton University, NJ 08544

John Lafferty
Department of Computer Science
Department of Statistics
University of Chicago, IL 60637

Larry Wasserman
Department of Statistics
Machine Learning Department
Carnegie Mellon University, PA 15213

Abstract

We prove a new exponential concentration inequality for a plug-in estimator of the Shannon mutual information. Previous results on mutual information estimation only bounded expected error. The advantage of having the exponential inequality is that, combined with the union bound, we can guarantee accurate estimators of the mutual information for many pairs of random variables simultaneously. As an application, we show how to use such a result to optimally estimate the density function and graph of a distribution which is Markov to a forest graph.

1 Introduction

We consider the problem of nonparametrically estimating the Shannon mutual information between two random variables. Let X1 and X2 be two random variables with domains 𝒳1 and 𝒳2 and joint density p(x1, x2). The mutual information between X1 and X2 is

  I(X1; X2) := ∫∫ p(x1, x2) log [ p(x1, x2) / ( p(x1) p(x2) ) ] dx1 dx2 = H(X1) + H(X2) − H(X1, X2),

where H(X1, X2) = −∫∫ p(x1, x2) log p(x1, x2) dx1 dx2, and similarly for the corresponding Shannon entropies H(X1) and H(X2) [4]. The mutual information is a measure of dependence between X1 and X2. To estimate I(X1; X2) well, it suffices to estimate H(X1, X2) := H(p). A simple way to estimate the Shannon entropy is to use a kernel density estimator (KDE) [1, 10, 9, 5, 20, 7], i.e., the densities p(x1, x2), p(x1), and p(x2) are separately estimated from samples and the estimated densities are used to calculate the entropy. Alternative methods involve estimation of the entropies using spacings [25, 26, 23], k-nearest neighbors [12, 21], the Edgeworth expansion [24], and convex optimization [17]. More discussion can be found in the survey articles [2, 19].
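The plug-in idea described above can be sketched numerically. The following is a minimal illustration, not the paper's estimator: the function name, the Epanechnikov kernel, and the grid-based Riemann sum are all our own choices. We estimate the density on a grid with a KDE and then approximate −∫ p log p.

```python
import numpy as np

def plugin_entropy_1d(x, h, m=400):
    """Naive plug-in estimate of H(p) = -integral of p log p on [0,1]:
    a kernel density estimate evaluated on a grid, then a Riemann sum.
    Illustration of the plug-in idea only, not the paper's estimator."""
    K = lambda u: 0.75 * np.maximum(1 - u**2, 0.0)   # Epanechnikov kernel
    grid = (np.arange(m) + 0.5) / m                  # midpoints of m cells
    p = K((grid[:, None] - x) / h).mean(axis=1) / h  # KDE on the grid
    p = np.maximum(p, 1e-12)                         # guard the logarithm
    return -np.mean(p * np.log(p))                   # cell width = 1/m

rng = np.random.default_rng(0)
x = rng.uniform(size=2000)              # Uniform[0,1]: true entropy H = 0
print(round(plugin_entropy_1d(x, h=0.1), 2))
```

For uniform data the estimate sits slightly above the true value 0: the KDE underestimates the density near the endpoints of [0, 1], which is exactly the boundary-bias problem the mirror-image estimator of this paper corrects.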
There have been many recent developments in the problem of estimating Shannon entropy and related quantities, as well as applications of these results to machine learning problems [8, 11, 18, 6]. Under weak conditions, it has been shown that there are estimators that achieve the parametric n^{-1} rate of convergence in mean squared error (MSE), where n is the sample size. In this paper, we construct an estimator with this rate, but we also prove an exponential concentration inequality for the estimator. More specifically, we show that our estimator Ĥ of H(p) satisfies

  sup_{p ∈ Σκ(2,L)} P( |Ĥ − H(p)| > ε ) ≤ 2 exp( −nε² / (36κ*²) ),

where Σκ(2, L) is a nonparametric class of distributions defined in Section 2 and κ* is a constant. To the best of our knowledge, this is the first such exponential inequality for nonparametric Shannon entropy and mutual information estimation. The advantage of this result, over the usual results which state that E(Ĥ − H(p))² = O(n^{-1}), is that we can apply the union bound and thus guarantee accurate mutual information estimation for many pairs of random variables simultaneously. As an application, we consider forest density estimation [15], which, in a d-dimensional problem, requires estimating d(d−1)/2 mutual informations in order to apply the Chow-Liu algorithm. As long as log d / n → 0 as n → ∞, we can estimate the forest graph well, even if d = d(n) increases with n exponentially fast.

The rest of this paper is organized as follows. The assumptions and estimator are given in Section 2. The main theoretical analysis is in Section 3. In Section 4 we show how to apply the result to forest density estimation. Some discussion and possible extensions are provided in the last section.

2 Estimator and Main Result

Let X = (X1, X2)ᵀ ∈ R² be a random vector with density p(x) := p(x1, x2), and let x^{(1)}, ..., x^{(n)} ∈ R² be a random sample from p. In this paper, we only consider the case of the bounded domain 𝒳 = [0, 1]². We want to estimate the Shannon entropy

  H(p) = −∫_𝒳 p(x) log p(x) dx.   (2.1)

We start with some assumptions on the density function p(x1, x2).

Assumption 2.1 (Density assumption). We assume the density p(x1, x2) belongs to a 2nd-order Hölder class Σκ(2, L) and is bounded away from zero and infinity. In particular, there exist constants κ1, κ2 such that

  0 < κ1 ≤ min_x p(x) ≤ max_x p(x) ≤ κ2 < ∞,   (2.2)

and for any (x1, x2)ᵀ ∈ 𝒳, there exists a constant L such that, for any (u, v)ᵀ,

  | p(x1 + u, x2 + v) − p(x1, x2) − ∂p(x1, x2)/∂x1 · u − ∂p(x1, x2)/∂x2 · v | ≤ L (u² + v²).   (2.3)

Assumption 2.2 (Boundary assumption). If {x_n} is any sequence converging to a boundary point x*, we require the density p(x) to have vanishing first-order partial derivatives:

  lim_{n→∞} ∂p(x_n)/∂x1 = lim_{n→∞} ∂p(x_n)/∂x2 = 0.   (2.4)

To efficiently estimate the entropy in (2.1), we use a KDE-based plug-in estimator.
Bias at the boundaries turns out to be very important in this problem; see [10] for a discussion of boundary bias. To correct the boundary effects, we use the following mirror-image kernel density estimator:

  p̂(x1, x2) := (1/n) Σ_{i=1}^n Σ_{a,b} (1/h²) K( (x1 − a(x1^{(i)}))/h ) K( (x2 − b(x2^{(i)}))/h ),   (2.5)

where a and b each range over the three maps u ↦ u, u ↦ −u, and u ↦ 2 − u (the identity and the reflections across 0 and 1). Here h is the bandwidth and K(·) is a univariate kernel function. We denote by K2(u, v) := K(u)K(v) the bivariate product kernel. This estimator has nine terms; one corresponds to the original data in the unit square [0, 1]², and each of the remaining terms corresponds to reflecting the data across one of the four sides or four corners of the square.
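The reflection construction can be sketched as follows. This is our own illustrative implementation of the idea in (2.5) (function name and kernel choice are ours): the nine terms are generated by composing the identity with the two reflections u ↦ −u and u ↦ 2 − u in each coordinate.

```python
import numpy as np

def mirror_kde(data, x, h):
    """Mirror-image product-kernel density estimate on [0,1]^2.

    data : (n, 2) samples; x : (2,) query point in [0,1]^2; h : bandwidth.
    Each sample is reflected across the four sides and four corners of the
    unit square, giving nine kernel terms per sample (a sketch of Eq. 2.5).
    """
    def K(u):                          # Epanechnikov kernel, support [-1, 1]
        return 0.75 * np.maximum(1 - u**2, 0.0)

    maps = (lambda u: u, lambda u: -u, lambda u: 2 - u)
    total = 0.0
    for a in maps:                     # nine (a, b) combinations in all
        for b in maps:
            total = total + K((x[0] - a(data[:, 0])) / h) \
                          * K((x[1] - b(data[:, 1])) / h)
    return total.sum() / (len(data) * h**2)

rng = np.random.default_rng(1)
data = rng.uniform(size=(4000, 2))     # true density is 1 on the square
print(round(mirror_kde(data, np.array([0.0, 0.0]), h=0.1), 2))
```

At a corner of the square the true density 1 is recovered up to sampling noise, whereas an uncorrected KDE would be biased down by roughly a factor of four there, since three quarters of the kernel mass would fall outside the support.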

Assumption 2.3 (Kernel assumption). The kernel K(·) is nonnegative and has bounded support [−1, 1], with ∫ K(u) du = 1 and ∫ u K(u) du = 0.

By Assumption 2.1, the values of the true density lie in the interval [κ1, κ2]. We propose a clipped KDE estimator

  p̃(x) = T_{κ1,κ2}( p̂(x) ),   (2.6)

where T_{κ1,κ2}(a) = κ1·I(a < κ1) + a·I(κ1 ≤ a ≤ κ2) + κ2·I(a > κ2), so that the estimated density also has this property. Letting g(u) = −u log u, we propose the following plug-in entropy estimator:

  Ĥ(p̃) := ∫_𝒳 g(p̃(x)) dx = −∫_𝒳 p̃(x) log p̃(x) dx.   (2.7)

Remark 2.1. The clipped estimator p̃ requires the knowledge of κ1 and κ2. In applications, we do not need to know the exact values of κ1 and κ2; a lower and an upper bound are sufficient.

Our main technical result is the following exponential concentration inequality for Ĥ(p̃) around the population quantity H(p). Our proof is given in Section 3.

Theorem 2.1. Under Assumptions 2.1, 2.2, and 2.3, if we choose the bandwidth according to h ≍ n^{-1/4}, then there exists a constant N such that for all n > N,

  sup_{p ∈ Σκ(2,L)} P( |Ĥ(p̃) − H(p)| > ε ) ≤ 2 exp( −nε² / (36κ*²) ),   (2.8)

where κ* = max{ |log κ1|, |log κ2| }.

To the best of our knowledge, this is the first time an exponential inequality like (2.8) has been established for Shannon entropy estimation over the Hölder class. It is easy to see that (2.8) implies the parametric rate of convergence in mean squared error, E(Ĥ(p̃) − H(p))² = O(n^{-1}). The bandwidth h ≍ n^{-1/4} in the above theorem is different from the usual choice for optimal bivariate density estimation, which is h ≍ n^{-1/6} for the 2nd-order Hölder class. By using h ≍ n^{-1/4}, we undersmooth the density estimate. As we show in the next section, such a bandwidth choice is important for achieving the optimal rate for entropy estimation.

Let I(p) := I(X1; X2) be the Shannon mutual information, and define

  I(p̃) := ∫∫ p̃(x1, x2) log [ p̃(x1, x2) / ( p̃(x1) p̃(x2) ) ] dx1 dx2.   (2.9)

The next corollary provides an exponential inequality for Shannon mutual information estimation.

Corollary 2.1. Under the same conditions as in Theorem 2.1, if we choose h ≍ n^{-1/4}, then there exists a constant N such that for all n > N,

  sup_{p ∈ Σκ(2,L)} P( |I(p̃) − I(p)| > ε ) ≤ 6 exp( −nε² / (324κ*²) ),   (2.10)

where κ* = max{ |log κ1|, |log κ2| }.

Proof.
Using the same proof as for Theorem 2.1, we can show that (2.8) also holds for estimating the univariate entropies H(X1) and H(X2). If |I(p̃) − I(p)| > ε, then at least one of the three entropy estimates must deviate from its population value by more than ε/3. The desired result then follows from the union bound, since I(p) := I(X1; X2) = H(X1) + H(X2) − H(X1, X2). ∎

Remark 2.2. We use the same bandwidth h ≍ n^{-1/4} to estimate the bivariate density p(x1, x2) and the univariate densities p(x1), p(x2). A related result is presented in [15]. They consider the same problem setting as ours and also use a KDE-based plug-in estimator to estimate the mutual information. However, unlike our proposal, they advocate the use of different bandwidths for bivariate and univariate entropy estimation: for the bivariate case they use h2 ≍ n^{-1/6}; for the univariate case they use h1 ≍ n^{-1/5}. Such bandwidths h1 and h2 are useful for optimally estimating the density functions themselves. However, such a choice achieves a suboptimal rate in terms of mutual information estimation:

  sup_{p ∈ Σκ(2,L)} P( |I(p̃) − I(p)| > ε ) ≤ c1 exp( −c2 n^{2/3} ε² ),

where c1 and c2 are two constants. Our method achieves the faster parametric rate.
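Corollary 2.1 and Remark 2.2 can be illustrated with a small numerical sketch. Everything below is our own simplified illustration: it uses clipped product-kernel KDEs with the single bandwidth h ≍ n^{-1/4} for all three entropies, as the corollary prescribes, but it omits the mirror-image boundary correction of (2.5) and replaces the integrals with Riemann sums.

```python
import numpy as np

def mi_plugin(data, h, kappa1=1e-3, kappa2=50.0, m=50):
    """Plug-in mutual information I(p~) = H1 + H2 - H12 (cf. Eqs. 2.6-2.9).

    All three entropies use clipped product-kernel KDEs with the SAME
    bandwidth h, as Theorem 2.1 / Remark 2.2 prescribe. A sketch only:
    the mirror-image boundary correction of Eq. 2.5 is omitted, and the
    integrals are Riemann sums on grids over [0,1] and [0,1]^2.
    """
    K = lambda u: 0.75 * np.maximum(1 - u**2, 0.0)   # Epanechnikov kernel
    grid = (np.arange(m) + 0.5) / m                  # midpoint grid on [0,1]

    def entropy1(x):                                 # univariate entropy
        p = K((grid[:, None] - x) / h).mean(axis=1) / h
        p = np.clip(p, kappa1, kappa2)               # the clipping T_{k1,k2}
        return -np.mean(p * np.log(p))

    g1, g2 = np.meshgrid(grid, grid)                 # bivariate entropy
    u1 = (g1[..., None] - data[:, 0]) / h
    u2 = (g2[..., None] - data[:, 1]) / h
    p12 = np.clip((K(u1) * K(u2)).mean(axis=-1) / h**2, kappa1, kappa2)
    H12 = -np.mean(p12 * np.log(p12))
    return entropy1(data[:, 0]) + entropy1(data[:, 1]) - H12

rng = np.random.default_rng(3)
n = 1000
data = rng.uniform(size=(n, 2))          # independent coordinates: I = 0
print(round(mi_plugin(data, h=n ** -0.25), 3))
```

For independent coordinates the estimate is close to the true value 0; much of the boundary bias cancels between the univariate and bivariate entropy terms in H1 + H2 − H12.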

3 Theoretical Analysis

Here we present the detailed proof of Theorem 2.1. To analyze the error |Ĥ(p̃) − H(p)|, we first decompose it into a bias (or approximation error) term and a variance (or estimation error) term:

  |Ĥ(p̃) − H(p)| ≤ |Ĥ(p̃) − E Ĥ(p̃)| + |E Ĥ(p̃) − H(p)|,   (3.1)

where the first term is the variance and the second is the bias. We are going to show that

  sup_{p ∈ Σκ(2,L)} P( |Ĥ(p̃) − E Ĥ(p̃)| > ε ) ≤ 2 exp( −nε² / (32κ*²) ),   (3.2)

  sup_{p ∈ Σκ(2,L)} |E Ĥ(p̃) − H(p)| ≤ c2 h² + c3 / (n h²),   (3.3)

where c2 and c3 are two constants. Since the bound on the variance in (3.2) does not depend on h, to optimize the rate we only need to choose h to minimize the right-hand side of (3.3). Therefore h ≍ n^{-1/4} achieves the optimal rate. In the rest of this section, we bound the bias and variance terms separately.

3.1 Analyzing the Bias Term

Here we prove (3.3). For a vector u, we denote the sup norm by ‖u‖∞. The next lemma bounds the integrated squared bias of the kernel density estimator over the support 𝒳 := [0, 1]².

Lemma 3.1. Under Assumptions 2.1, 2.2, and 2.3, there exists a constant c > 0 such that

  sup_{p ∈ Σκ(2,L)} ∫_𝒳 ( E p̂(x) − p(x) )² dx ≤ c h⁴.   (3.4)

Proof. We partition the support 𝒳 := [0, 1]² into three regions, 𝒳 = B ∪ C ∪ I: the boundary area B, the corner area C, and the interior area I:

  C = { x : ‖x − u‖∞ ≤ 2h for u = (0,0)ᵀ, (0,1)ᵀ, (1,0)ᵀ, or (1,1)ᵀ },   (3.5)
  B = { x : x is within distance 2h of an edge of 𝒳, but does not belong to C },   (3.6)
  I = 𝒳 \ (C ∪ B).   (3.7)

We have the following decomposition:

  ∫_𝒳 ( E p̂(x) − p(x) )² dx = ∫_I + ∫_B + ∫_C =: T_I + T_B + T_C.

From standard results on kernel density estimation, we know that sup_{p ∈ Σκ(2,L)} T_I ≤ c1 h⁴. In the next two subsections, we bound T_B := ∫_B ( E p̂(x) − p(x) )² dx and T_C := ∫_C ( E p̂(x) − p(x) )² dx.

3.1.1 Analyzing T_B

Let A := { x : 2h ≤ x1 ≤ 1 − 2h and 0 ≤ x2 ≤ 2h }. By symmetry over the four edges,

  T_B = ∫_B ( E p̂(x) − p(x) )² dx ≤ 4 ∫_A ( E p̂(x) − p(x) )² dx.   (3.8)

For x ∈ A, only the reflection across the edge {x2 = 0} contributes, so

  p̂(x) = (1/(n h²)) Σ_{i=1}^n K( (x1 − x1^{(i)})/h ) [ K( (x2 − x2^{(i)})/h ) + K( (x2 + x2^{(i)})/h ) ].   (3.9)

Therefore, for x ∈ A we have

  E p̂(x) = (1/h²) ∫∫ K( (x1 − t1)/h ) [ K( (x2 − t2)/h ) + K( (x2 + t2)/h ) ] p(t1, t2) dt1 dt2.

Changing variables, this becomes

  E p̂(x) = ∫∫ K(u1) K(u2) p(x1 − hu1, x2 − hu2) du1 du2 + ∫∫ K(u1) K(u2) p(x1 − hu1, hu2 − x2) du1 du2.

Since p ∈ Σκ(2, L) and 0 ≤ x2 ≤ 2h, the Hölder condition (2.3) gives

  | p(x1 − hu1, x2 − hu2) − p(x1, x2) + hu1 ∂p(x1, x2)/∂x1 + hu2 ∂p(x1, x2)/∂x2 | ≤ L h² (u1² + u2²),

and similarly for the mirrored term p(x1 − hu1, hu2 − x2). Since ∫ u K(u) du = 0, the first-order terms in u1 integrate to zero. Moreover, since |u1|, |u2| ≤ 1, by the Hölder condition and the assumption that the density p(x) has vanishing partial derivatives at the boundary points,

  | ∂p(x1, x2)/∂x2 | = | ∂p(x1, x2)/∂x2 − ∂p(x1, 0)/∂x2 | ≤ 2Lh  for 0 ≤ x2 ≤ 2h,

so the first-order terms in u2 are also of order h². For any x ∈ A, we can therefore bound the bias term:

  | E p̂(x) − p(x) | ≤ c_B L h²,

for a constant c_B depending only on the kernel. Therefore, since the boundary region B has area of order h, we have T_B ≤ c2 h⁵.

3.1.2 Analyzing T_C

Let A′ := { x : 0 ≤ x1 ≤ 2h, 0 ≤ x2 ≤ 2h }. By symmetry over the four corners,

  T_C = ∫_C ( E p̂(x) − p(x) )² dx ≤ 4 ∫_{A′} ( E p̂(x) − p(x) )² dx.

For notational simplicity, we write

  U(x, a, b) = K( (x1 − a)/h ) K( (x2 − b)/h ).

For x ∈ A′, the two edge reflections and the corner reflection all contribute, so

  p̂(x) = (1/(n h²)) Σ_{i=1}^n [ U(x, x1^{(i)}, x2^{(i)}) + U(x, −x1^{(i)}, x2^{(i)}) + U(x, x1^{(i)}, −x2^{(i)}) + U(x, −x1^{(i)}, −x2^{(i)}) ].

Therefore, for x ∈ A′ we have

  E p̂(x) = (1/h²) ∫∫ [ U(x, t1, t2) + U(x, −t1, t2) + U(x, t1, −t2) + U(x, −t1, −t2) ] p(t1, t2) dt1 dt2,

which, after the same change of variables as before, becomes

  E p̂(x) = ∫∫ K(u1) K(u2) [ p(x1 − hu1, x2 − hu2) + p(hu1 − x1, x2 − hu2) + p(x1 − hu1, hu2 − x2) + p(hu1 − x1, hu2 − x2) ] du1 du2.

Since K is a symmetric kernel supported on [−1, 1] with ∫ K(u) du = 1, the total kernel weight attached to these four terms integrates to one over the relevant range, and for x = (x1, x2)ᵀ ∈ A′ it suffices to compare each of the four terms with p(x1, x2). Using the fact that p ∈ Σκ(2, L), 0 ≤ x1, x2 ≤ 2h, and |u1|, |u2| ≤ 1, and arguing exactly as for T_B (both first-order partial derivatives are of order h near the boundary by Assumption 2.2), each of the four differences is bounded by a constant multiple of L h², the largest such bound being 36 L h² for the doubly reflected term. For x ∈ A′, we can then bound the bias term:

  | E p̂(x) − p(x) | ≤ c_C L h²,

for a constant c_C depending only on the kernel. Therefore, since the corner region C has area of order h², we have T_C ≤ c3 h⁶. Combining the analysis of T_B, T_C, and T_I, we obtain

  ∫_𝒳 ( E p̂(x) − p(x) )² dx ≤ c1 h⁴ + c2 h⁵ + c3 h⁶ ≤ c h⁴,

which shows that the mirror-image kernel density estimator is free of boundary bias: the boundary and corner regions do not inflate the interior rate. The desired result of Lemma 3.1 is thus proved. ∎

3.1.3 Analyzing the Bias of the Entropy Estimator

Lemma 3.2. Under Assumptions 2.1, 2.2, and 2.3, with h ≍ n^{-1/4} there exists a universal constant C that does not depend on the true density p, such that

  sup_{p ∈ Σκ(2,L)} | E Ĥ(p̃) − H(p) | ≤ C n^{-1/2}.

Proof. Recalling that g(u) = −u log u, by Taylor's theorem we have

  g(p̃(x)) − g(p(x)) = −[ log p(x) + 1 ] ( p̃(x) − p(x) ) − (1/(2ξ(x))) ( p̃(x) − p(x) )²,   (3.36)

where ξ(x) lies between p̃(x) and p(x). It is obvious that κ1 ≤ ξ(x) ≤ κ2. Let κ* be as defined in the statement of the theorem. Using Fubini's theorem, Hölder's inequality, and the fact that the Lebesgue measure of 𝒳 is 1, we have

  | E Ĥ(p̃) − H(p) |
    = | ∫_𝒳 E[ g(p̃(x)) − g(p(x)) ] dx |
    ≤ ∫_𝒳 | log p(x) + 1 | · | E p̃(x) − p(x) | dx + ∫_𝒳 (1/(2ξ(x))) E( p̃(x) − p(x) )² dx
    ≤ 2κ* ∫_𝒳 | E p̃(x) − p(x) | dx + (1/(2κ1)) ∫_𝒳 E( p̃(x) − p(x) )² dx
    ≤ c1 h² + c4 h⁴ + c3 / (n h²).   (3.42)

The last inequality follows from standard results on kernel density estimation and Lemma 3.1, where c1, c4, c3 are three constants. We get the desired result by setting h ≍ n^{-1/4}. ∎

3.2 Analyzing the Variance Term

Lemma 3.3. Under Assumptions 2.1, 2.2, and 2.3, we have

  sup_{p ∈ Σκ(2,L)} P( |Ĥ(p̃) − E Ĥ(p̃)| > ε ) ≤ 2 exp( −nε² / (32κ*²) ).   (3.43)

Proof. Let p̂′(x) be the kernel density estimator defined as in (2.5) but with the j-th data point x^{(j)} replaced by an arbitrary value x′^{(j)}, and let p̃′ = T_{κ1,κ2}(p̂′). Since g′(u) = −(log u + 1), by Assumption 2.1 we have

  max{ |g′(p̃(x))|, |g′(p̃′(x))| } ≤ 1 + κ* ≤ 2κ*,

assuming without loss of generality that κ* ≥ 1. For notational simplicity, write K̄(x, y) for the nine-term mirrored product kernel in (2.5), so that p̂(x) = (1/(n h²)) Σ_i K̄(x, x^{(i)}). Using the mean-value theorem and the fact that T_{κ1,κ2} is a contraction, we have

  sup_{x^{(1)},...,x^{(n)}, x′^{(j)}} | Ĥ(p̃) − Ĥ(p̃′) |
    = sup | ∫_𝒳 [ g(p̃(x)) − g(p̃′(x)) ] dx |
    ≤ 2κ* sup ∫_𝒳 | p̃(x) − p̃′(x) | dx
    ≤ 2κ* sup ∫_𝒳 | p̂(x) − p̂′(x) | dx
    ≤ (2κ*/(n h²)) sup ∫_𝒳 [ K̄(x, x^{(j)}) + K̄(x, x′^{(j)}) ] dx
    ≤ 8κ*/n,

since the mirrored kernel satisfies ∫_𝒳 K̄(x, y) dx ≤ 2h². Therefore, using McDiarmid's inequality [16], we get the desired inequality (3.43). The uniformity result holds since the constant does not depend on the true density p. ∎
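For reference, the bandwidth choice in Theorem 2.1 follows from (3.2) and (3.3) by a one-line optimization, assuming the bias bound has the stated form c2 h² + c3 (n h²)^{-1}:

```latex
% The variance bound (3.2) is free of h, so h is chosen to minimize
% the bias bound  b(h) = c_2 h^2 + c_3 (n h^2)^{-1}:
\[
  b'(h) = 2 c_2 h - \frac{2 c_3}{n h^3} = 0
  \;\Longrightarrow\;
  h^4 = \frac{c_3}{c_2 n}
  \;\Longrightarrow\;
  h \asymp n^{-1/4},
  \qquad
  b(h) \asymp h^2 + \frac{1}{n h^2} \asymp n^{-1/2}.
\]
```

This is the undersmoothing discussed after Theorem 2.1: the density-optimal choice h ≍ n^{-1/6} would leave the stochastic term (n h²)^{-1} of order n^{-2/3}, which is larger than n^{-1/2}.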

4 Application to Forest Density Estimation

We apply the concentration inequality (2.10) to analyze an algorithm for learning high-dimensional forest graphical models [15]. In a forest density estimation problem, we observe n data points x^{(1)}, ..., x^{(n)} ∈ R^d from a d-dimensional random vector X. We have two learning tasks: (i) we want to estimate an acyclic undirected graph F = (V, E), where V is the vertex set containing all the random variables and E is the edge set such that the edge (j, k) is absent from E if and only if the corresponding random variables X_j and X_k are conditionally independent given the other variables X_{V∖{j,k}}; (ii) once we have an estimated graph F̂, we want to estimate the density function p(x).

Using the negative log-likelihood loss, Liu et al. [15] show that the graph estimation problem can be recast as the problem of finding the maximum weight spanning forest of a weighted graph, where the weight of the edge connecting nodes j and k is I(X_j; X_k), the mutual information between these two variables. Empirically, we replace I(X_j; X_k) by its estimate Î(X_j; X_k) from (2.9). The forest graph can be obtained by the Chow-Liu algorithm [3, 13], which is an iterative algorithm: at each iteration it adds the edge connecting the pair of variables with maximum mutual information among all pairs not yet visited by the algorithm, provided that adding this edge does not form a cycle. When stopped early, after s < d − 1 edges have been added, it yields the best s-edge weighted forest.

Once a forest graph F̂ = (V, Ê) is estimated, we propose to estimate the forest density as

  p̂_F̂(x) = Π_{(j,k) ∈ Ê} [ p̃_{h2}(x_j, x_k) / ( p̃_{h2}(x_j) p̃_{h2}(x_k) ) ] · Π_{u ∈ Û} p̃_{h1}(x_u) · Π_{l ∈ V∖Û} p̃_{h2}(x_l),   (4.1)

where Û is the set of isolated vertices in the estimated forest F̂, and h1 and h2 are the univariate and bivariate bandwidths specified in Theorem 4.2. Our estimator is different from the estimator proposed in [15]: once the graph F̂ is given, we treat the isolated variables differently from the connected variables. As will be shown in Theorem 4.2, such a choice leads to minimax optimal forest density estimation, while the rate obtained in [15] is suboptimal. Let F_d^s denote the set of forest graphs with d nodes and no more than s edges.
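The greedy step described above can be sketched with a Kruskal-style union-find; this is our own illustrative implementation (names ours), where `weights` would hold the mutual information estimates Î(X_j; X_k) from (2.9).

```python
import numpy as np

def chow_liu_forest(weights, s):
    """Best s-edge weighted forest via the Chow-Liu / Kruskal greedy step.

    weights : symmetric (d, d) matrix of (estimated) mutual informations.
    Repeatedly add the heaviest remaining edge that does not close a cycle,
    stopping after s edges. A union-find structure detects cycles.
    """
    d = weights.shape[0]
    parent = list(range(d))

    def find(a):                         # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges = sorted(((weights[j, k], j, k)
                    for j in range(d) for k in range(j + 1, d)), reverse=True)
    forest = []
    for w, j, k in edges:
        if len(forest) == s:
            break
        rj, rk = find(j), find(k)
        if rj != rk:                     # adding (j, k) creates no cycle
            parent[rj] = rk
            forest.append((j, k))
    return forest

# toy example: a chain 0-1-2 with strong weights plus weaker extra edges
W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.8, 0.0],
              [0.1, 0.8, 0.0, 0.2],
              [0.0, 0.0, 0.2, 0.0]])
print(chow_liu_forest(W, s=2))           # -> [(0, 1), (1, 2)]
```

With s = 2 the two heaviest edges (0,1) and (1,2) are selected; the edge (0,2) would be skipped even with a larger weight budget, since it closes a cycle.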
Let D(· ‖ ·) be the Kullback-Leibler divergence. We define the s-oracle forest F*_s := (V, E*) and its corresponding oracle density p_{F*} to be

  F*_s = argmin_{F ∈ F_d^s} D( p ‖ p_F ),  where  p_F(x) := Π_{(j,k) ∈ E_F} [ p(x_j, x_k) / ( p(x_j) p(x_k) ) ] · Π_{l ∈ V} p(x_l).   (4.2)

Let Σκ(2, L) be defined as in Assumption 2.1. We define a density class P_κ as

  P_κ := { p : p is a d-dimensional density with p(x_j, x_k) ∈ Σκ(2, L) for any j ≠ k }.   (4.3)

The next two theorems show that the above forest density estimation procedure is minimax optimal for both graph recovery and density estimation. Their proofs are provided in a technical report [14].

Theorem 4.1 (Graph Recovery). Let F̂_s be the estimated s-edge forest graph using the Chow-Liu algorithm. Under the same conditions as in [15], if we choose h ≍ n^{-1/4} for the mutual information estimator in (2.9), then

  sup_{p ∈ P_κ} P( F̂_s ≠ F*_s ) = O( s/n )  whenever  log d / n → 0.   (4.4)

Theorem 4.2 (Density Estimation). Once the s-edge forest graph F̂_s as in Theorem 4.1 has been obtained, we calculate the density estimator (4.1) by choosing h1 ≍ n^{-1/5} and h2 ≍ n^{-1/6}. Then

  sup_{p ∈ P_κ} E ∫ | p̂_F̂s(x) − p_{F*_s}(x) | dx ≤ C √( s/n^{2/3} + (d − s)/n^{4/5} ).   (4.5)

5 Discussion and Conclusions

Theorem 4.1 allows d to increase exponentially fast as n increases while still guaranteeing graph recovery consistency. Theorem 4.2 provides the rate of convergence for the L1-risk. The obtained rate is minimax optimal over the class P_κ. The term s n^{-2/3} corresponds to the price paid to estimate bivariate densities, while the term (d − s) n^{-4/5} corresponds to the price paid to estimate univariate densities. In this way, we see that the exponential concentration inequality for Shannon mutual information leads to a significantly improved theoretical analysis of forest density estimation, in terms of both graph estimation and density estimation.

Acknowledgments. This research was supported by NSF grant IIS-673 and AFOSR contract FA.

References

[1] Ibrahim A. Ahmad and Pi-Erh Lin. A nonparametric estimation of the entropy for absolutely continuous distributions (Corresp.). IEEE Transactions on Information Theory, 22(3):372–375, 1976.
[2] J. Beirlant, E. J. Dudewicz, L. Györfi, and E. C. van der Meulen. Nonparametric entropy estimation: An overview. International Journal of Mathematical and Statistical Sciences, 6:17–39, 1997.
[3] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462–467, 1968.
[4] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley, 1991.
[5] Paul P. B. Eggermont and Vincent N. LaRiccia. Best asymptotic normality of the kernel density entropy estimator for smooth densities. IEEE Transactions on Information Theory, 45(4):1321–1326, 1999.
[6] A. Gretton, R. Herbrich, and A. J. Smola. The kernel mutual information. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2003), volume 4, pages IV-880–883. IEEE, 2003.
[7] Peter Hall and Sally Morton. On the estimation of entropy. Annals of the Institute of Statistical Mathematics, 45:69–88, 1993.
[8] A. O. Hero III, B. Ma, O. J. J. Michel, and J. Gorman. Applications of entropic spanning graphs. IEEE Signal Processing Magazine, 19(5):85–95, 2002.
[9] Harry Joe. Estimation of entropy and other functionals of a multivariate density. Annals of the Institute of Statistical Mathematics, 41(4):683–697, December 1989.
[10] M. C. Jones, O. Linton, and J. P. Nielsen. A simple bias reduction method for density estimation. Biometrika, 82(2):327–338, 1995.
[11] Shiraj Khan, Sharba Bandyopadhyay, Auroop R. Ganguly, Sunil Saigal, David J. Erickson, Vladimir Protopopescu, and George Ostrouchov. Relative performance of mutual information estimation methods for quantifying the dependence among short and noisy data. Physical Review E, 76:026209, August 2007.
[12] Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Physical Review E, 69(6):066138, June 2004.
[13] Joseph B. Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society, 7(1):48–50, 1956.
[14] Han Liu, John Lafferty, and Larry Wasserman. Optimal forest density estimation. Technical report.
[15] Han Liu, Min Xu, Haijie Gu, Anupam Gupta, John D. Lafferty, and Larry A. Wasserman. Forest density estimation. Journal of Machine Learning Research, 12:907–951, 2011.
[16] C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, number 141 in London Mathematical Society Lecture Note Series, pages 148–188. Cambridge University Press, August 1989.
[17] XuanLong Nguyen, Martin J. Wainwright, and Michael I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847–5861, 2010.
[18] D. Pál, B. Póczos, and C. Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. arXiv preprint arXiv:1003.1954, 2010.
[19] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15(6):1191–1253, 2003.
[20] Liam Paninski and Masanao Yajima. Undersmoothed kernel entropy estimators. IEEE Transactions on Information Theory, 54(9):4384–4388, 2008.
[21] Barnabás Póczos and Jeff G. Schneider. Nonparametric estimation of conditional information and divergences. Journal of Machine Learning Research - Proceedings Track, 22:914–923, 2012.
[22] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York, 1986.
[23] A. B. Tsybakov and E. C. van der Meulen. Root-n Consistent Estimators of Entropy for Densities with Unbounded Support, volume 23. Université catholique de Louvain, Institut de statistique, 1994.
[24] Marc M. Van Hulle. Edgeworth approximation of multivariate differential entropy. Neural Computation, 17(9):1903–1910, September 2005.
[25] O. Vasicek. A test for normality based on sample entropy. Journal of the Royal Statistical Society, Series B, 38(1):54–59, 1976.
[26] Bert van Es. Estimating functionals related to a density by a class of statistics based on spacings. Scandinavian Journal of Statistics, 19:61–72, 1992.


More information

1 Introduction to Optimization

1 Introduction to Optimization Unconstrained Convex Optimization 2 1 Introduction to Optimization Given a general optimization problem of te form min x f(x) (1.1) were f : R n R. Sometimes te problem as constraints (we are only interested

More information

NADARAYA WATSON ESTIMATE JAN 10, 2006: version 2. Y ik ( x i

NADARAYA WATSON ESTIMATE JAN 10, 2006: version 2. Y ik ( x i NADARAYA WATSON ESTIMATE JAN 0, 2006: version 2 DATA: (x i, Y i, i =,..., n. ESTIMATE E(Y x = m(x by n i= ˆm (x = Y ik ( x i x n i= K ( x i x EXAMPLES OF K: K(u = I{ u c} (uniform or box kernel K(u = u

More information

Exam 1 Review Solutions

Exam 1 Review Solutions Exam Review Solutions Please also review te old quizzes, and be sure tat you understand te omework problems. General notes: () Always give an algebraic reason for your answer (graps are not sufficient),

More information

Continuity and Differentiability of the Trigonometric Functions

Continuity and Differentiability of the Trigonometric Functions [Te basis for te following work will be te definition of te trigonometric functions as ratios of te sides of a triangle inscribed in a circle; in particular, te sine of an angle will be defined to be te

More information

Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems

Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for Second-Order Elliptic Problems Applied Matematics, 06, 7, 74-8 ttp://wwwscirporg/journal/am ISSN Online: 5-7393 ISSN Print: 5-7385 Numerical Experiments Using MATLAB: Superconvergence of Nonconforming Finite Element Approximation for

More information

2.3 Algebraic approach to limits

2.3 Algebraic approach to limits CHAPTER 2. LIMITS 32 2.3 Algebraic approac to its Now we start to learn ow to find its algebraically. Tis starts wit te simplest possible its, and ten builds tese up to more complicated examples. Fact.

More information

Brazilian Journal of Physics, vol. 29, no. 1, March, Ensemble and their Parameter Dierentiation. A. K. Rajagopal. Naval Research Laboratory,

Brazilian Journal of Physics, vol. 29, no. 1, March, Ensemble and their Parameter Dierentiation. A. K. Rajagopal. Naval Research Laboratory, Brazilian Journal of Pysics, vol. 29, no. 1, Marc, 1999 61 Fractional Powers of Operators of sallis Ensemble and teir Parameter Dierentiation A. K. Rajagopal Naval Researc Laboratory, Wasington D. C. 2375-532,

More information

Artificial Neural Network Model Based Estimation of Finite Population Total

Artificial Neural Network Model Based Estimation of Finite Population Total International Journal of Science and Researc (IJSR), India Online ISSN: 2319-7064 Artificial Neural Network Model Based Estimation of Finite Population Total Robert Kasisi 1, Romanus O. Odiambo 2, Antony

More information

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t).

1. Consider the trigonometric function f(t) whose graph is shown below. Write down a possible formula for f(t). . Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd, periodic function tat as been sifted upwards, so we will use

More information

Chapter 5 FINITE DIFFERENCE METHOD (FDM)

Chapter 5 FINITE DIFFERENCE METHOD (FDM) MEE7 Computer Modeling Tecniques in Engineering Capter 5 FINITE DIFFERENCE METHOD (FDM) 5. Introduction to FDM Te finite difference tecniques are based upon approximations wic permit replacing differential

More information

4.2 - Richardson Extrapolation

4.2 - Richardson Extrapolation . - Ricardson Extrapolation. Small-O Notation: Recall tat te big-o notation used to define te rate of convergence in Section.: Definition Let x n n converge to a number x. Suppose tat n n is a sequence

More information

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4.

Solution. Solution. f (x) = (cos x)2 cos(2x) 2 sin(2x) 2 cos x ( sin x) (cos x) 4. f (π/4) = ( 2/2) ( 2/2) ( 2/2) ( 2/2) 4. December 09, 20 Calculus PracticeTest s Name: (4 points) Find te absolute extrema of f(x) = x 3 0 on te interval [0, 4] Te derivative of f(x) is f (x) = 3x 2, wic is zero only at x = 0 Tus we only need

More information

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx.

Consider a function f we ll specify which assumptions we need to make about it in a minute. Let us reformulate the integral. 1 f(x) dx. Capter 2 Integrals as sums and derivatives as differences We now switc to te simplest metods for integrating or differentiating a function from its function samples. A careful study of Taylor expansions

More information

A SHORT INTRODUCTION TO BANACH LATTICES AND

A SHORT INTRODUCTION TO BANACH LATTICES AND CHAPTER A SHORT INTRODUCTION TO BANACH LATTICES AND POSITIVE OPERATORS In tis capter we give a brief introduction to Banac lattices and positive operators. Most results of tis capter can be found, e.g.,

More information

Nonparametric Estimation of Rényi Divergence and Friends

Nonparametric Estimation of Rényi Divergence and Friends Nonparametric Estimation of Rényi Divergence and Friends Aksay Krisnamurty Kirtevasan Kandasamy Barnabás Póczos Larry Wasserman Carnegie Mellon University, 5000 Forbes Avenue, Pittsburg PA 15213 AKSHAYKR@CS.CMU.EDU

More information

Numerical Differentiation

Numerical Differentiation Numerical Differentiation Finite Difference Formulas for te first derivative (Using Taylor Expansion tecnique) (section 8.3.) Suppose tat f() = g() is a function of te variable, and tat as 0 te function

More information

Symmetry Labeling of Molecular Energies

Symmetry Labeling of Molecular Energies Capter 7. Symmetry Labeling of Molecular Energies Notes: Most of te material presented in tis capter is taken from Bunker and Jensen 1998, Cap. 6, and Bunker and Jensen 2005, Cap. 7. 7.1 Hamiltonian Symmetry

More information

Handling Missing Data on Asymmetric Distribution

Handling Missing Data on Asymmetric Distribution International Matematical Forum, Vol. 8, 03, no. 4, 53-65 Handling Missing Data on Asymmetric Distribution Amad M. H. Al-Kazale Department of Matematics, Faculty of Science Al-albayt University, Al-Mafraq-Jordan

More information

On Local Linear Regression Estimation of Finite Population Totals in Model Based Surveys

On Local Linear Regression Estimation of Finite Population Totals in Model Based Surveys American Journal of Teoretical and Applied Statistics 2018; 7(3): 92-101 ttp://www.sciencepublisinggroup.com/j/ajtas doi: 10.11648/j.ajtas.20180703.11 ISSN: 2326-8999 (Print); ISSN: 2326-9006 (Online)

More information

The Derivative as a Function

The Derivative as a Function Section 2.2 Te Derivative as a Function 200 Kiryl Tsiscanka Te Derivative as a Function DEFINITION: Te derivative of a function f at a number a, denoted by f (a), is if tis limit exists. f (a) f(a + )

More information

On the Identifiability of the Post-Nonlinear Causal Model

On the Identifiability of the Post-Nonlinear Causal Model UAI 9 ZHANG & HYVARINEN 647 On te Identifiability of te Post-Nonlinear Causal Model Kun Zang Dept. of Computer Science and HIIT University of Helsinki Finland Aapo Hyvärinen Dept. of Computer Science,

More information

Math Spring 2013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, (1/z) 2 (1/z 1) 2 = lim

Math Spring 2013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, (1/z) 2 (1/z 1) 2 = lim Mat 311 - Spring 013 Solutions to Assignment # 3 Completion Date: Wednesday May 15, 013 Question 1. [p 56, #10 (a)] 4z Use te teorem of Sec. 17 to sow tat z (z 1) = 4. We ave z 4z (z 1) = z 0 4 (1/z) (1/z

More information

Lecture 21. Numerical differentiation. f ( x+h) f ( x) h h

Lecture 21. Numerical differentiation. f ( x+h) f ( x) h h Lecture Numerical differentiation Introduction We can analytically calculate te derivative of any elementary function, so tere migt seem to be no motivation for calculating derivatives numerically. However

More information

(a) At what number x = a does f have a removable discontinuity? What value f(a) should be assigned to f at x = a in order to make f continuous at a?

(a) At what number x = a does f have a removable discontinuity? What value f(a) should be assigned to f at x = a in order to make f continuous at a? Solutions to Test 1 Fall 016 1pt 1. Te grap of a function f(x) is sown at rigt below. Part I. State te value of eac limit. If a limit is infinite, state weter it is or. If a limit does not exist (but is

More information

Hazard Rate Function Estimation Using Erlang Kernel

Hazard Rate Function Estimation Using Erlang Kernel Pure Matematical Sciences, Vol. 3, 04, no. 4, 4-5 HIKARI Ltd, www.m-ikari.com ttp://dx.doi.org/0.988/pms.04.466 Hazard Rate Function Estimation Using Erlang Kernel Raid B. Sala Department of Matematics

More information

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these.

4. The slope of the line 2x 7y = 8 is (a) 2/7 (b) 7/2 (c) 2 (d) 2/7 (e) None of these. Mat 11. Test Form N Fall 016 Name. Instructions. Te first eleven problems are wort points eac. Te last six problems are wort 5 points eac. For te last six problems, you must use relevant metods of algebra

More information

lecture 26: Richardson extrapolation

lecture 26: Richardson extrapolation 43 lecture 26: Ricardson extrapolation 35 Ricardson extrapolation, Romberg integration Trougout numerical analysis, one encounters procedures tat apply some simple approximation (eg, linear interpolation)

More information

LECTURE 14 NUMERICAL INTEGRATION. Find

LECTURE 14 NUMERICAL INTEGRATION. Find LECTURE 14 NUMERCAL NTEGRATON Find b a fxdx or b a vx ux fx ydy dx Often integration is required. However te form of fx may be suc tat analytical integration would be very difficult or impossible. Use

More information

Subdifferentials of convex functions

Subdifferentials of convex functions Subdifferentials of convex functions Jordan Bell jordan.bell@gmail.com Department of Matematics, University of Toronto April 21, 2014 Wenever we speak about a vector space in tis note we mean a vector

More information

THE STURM-LIOUVILLE-TRANSFORMATION FOR THE SOLUTION OF VECTOR PARTIAL DIFFERENTIAL EQUATIONS. L. Trautmann, R. Rabenstein

THE STURM-LIOUVILLE-TRANSFORMATION FOR THE SOLUTION OF VECTOR PARTIAL DIFFERENTIAL EQUATIONS. L. Trautmann, R. Rabenstein Worksop on Transforms and Filter Banks (WTFB),Brandenburg, Germany, Marc 999 THE STURM-LIOUVILLE-TRANSFORMATION FOR THE SOLUTION OF VECTOR PARTIAL DIFFERENTIAL EQUATIONS L. Trautmann, R. Rabenstein Lerstul

More information

Solutions to the Multivariable Calculus and Linear Algebra problems on the Comprehensive Examination of January 31, 2014

Solutions to the Multivariable Calculus and Linear Algebra problems on the Comprehensive Examination of January 31, 2014 Solutions to te Multivariable Calculus and Linear Algebra problems on te Compreensive Examination of January 3, 24 Tere are 9 problems ( points eac, totaling 9 points) on tis portion of te examination.

More information

arxiv: v1 [math.na] 28 Apr 2017

arxiv: v1 [math.na] 28 Apr 2017 THE SCOTT-VOGELIUS FINITE ELEMENTS REVISITED JOHNNY GUZMÁN AND L RIDGWAY SCOTT arxiv:170500020v1 [matna] 28 Apr 2017 Abstract We prove tat te Scott-Vogelius finite elements are inf-sup stable on sape-regular

More information

Continuity and Differentiability Worksheet

Continuity and Differentiability Worksheet Continuity and Differentiability Workseet (Be sure tat you can also do te grapical eercises from te tet- Tese were not included below! Typical problems are like problems -3, p. 6; -3, p. 7; 33-34, p. 7;

More information

LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION

LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LIMITATIONS OF EULER S METHOD FOR NUMERICAL INTEGRATION LAURA EVANS.. Introduction Not all differential equations can be explicitly solved for y. Tis can be problematic if we need to know te value of y

More information

Gradient Descent etc.

Gradient Descent etc. 1 Gradient Descent etc EE 13: Networked estimation and control Prof Kan) I DERIVATIVE Consider f : R R x fx) Te derivative is defined as d fx) = lim dx fx + ) fx) Te cain rule states tat if d d f gx) )

More information

The Verlet Algorithm for Molecular Dynamics Simulations

The Verlet Algorithm for Molecular Dynamics Simulations Cemistry 380.37 Fall 2015 Dr. Jean M. Standard November 9, 2015 Te Verlet Algoritm for Molecular Dynamics Simulations Equations of motion For a many-body system consisting of N particles, Newton's classical

More information

Analysis of an SEIR Epidemic Model with a General Feedback Vaccination Law

Analysis of an SEIR Epidemic Model with a General Feedback Vaccination Law Proceedings of te World Congress on Engineering 5 Vol WCE 5 July - 3 5 London U.K. Analysis of an ER Epidemic Model wit a General Feedback Vaccination Law M. De la en. Alonso-Quesada A.beas and R. Nistal

More information

3. THE EXCHANGE ECONOMY

3. THE EXCHANGE ECONOMY Essential Microeconomics -1-3. THE EXCHNGE ECONOMY Pareto efficient allocations 2 Edgewort box analysis 5 Market clearing prices 13 Walrasian Equilibrium 16 Equilibrium and Efficiency 22 First welfare

More information

MIXED DISCONTINUOUS GALERKIN APPROXIMATION OF THE MAXWELL OPERATOR. SIAM J. Numer. Anal., Vol. 42 (2004), pp

MIXED DISCONTINUOUS GALERKIN APPROXIMATION OF THE MAXWELL OPERATOR. SIAM J. Numer. Anal., Vol. 42 (2004), pp MIXED DISCONTINUOUS GALERIN APPROXIMATION OF THE MAXWELL OPERATOR PAUL HOUSTON, ILARIA PERUGIA, AND DOMINI SCHÖTZAU SIAM J. Numer. Anal., Vol. 4 (004), pp. 434 459 Abstract. We introduce and analyze a

More information

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1

Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1 Model Specification Testing in Nonparametric and Semiparametric Time Series Econometrics 1 By Jiti Gao 2 and Maxwell King 3 Abstract We propose a simultaneous model specification procedure for te conditional

More information

Polynomial Interpolation

Polynomial Interpolation Capter 4 Polynomial Interpolation In tis capter, we consider te important problem of approximating a function f(x, wose values at a set of distinct points x, x, x 2,,x n are known, by a polynomial P (x

More information

Long Term Time Series Prediction with Multi-Input Multi-Output Local Learning

Long Term Time Series Prediction with Multi-Input Multi-Output Local Learning Long Term Time Series Prediction wit Multi-Input Multi-Output Local Learning Gianluca Bontempi Macine Learning Group, Département d Informatique Faculté des Sciences, ULB, Université Libre de Bruxelles

More information

Te comparison of dierent models M i is based on teir relative probabilities, wic can be expressed, again using Bayes' teorem, in terms of prior probab

Te comparison of dierent models M i is based on teir relative probabilities, wic can be expressed, again using Bayes' teorem, in terms of prior probab To appear in: Advances in Neural Information Processing Systems 9, eds. M. C. Mozer, M. I. Jordan and T. Petsce. MIT Press, 997 Bayesian Model Comparison by Monte Carlo Caining David Barber D.Barber@aston.ac.uk

More information

POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY

POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY APPLICATIONES MATHEMATICAE 36, (29), pp. 2 Zbigniew Ciesielski (Sopot) Ryszard Zieliński (Warszawa) POLYNOMIAL AND SPLINE ESTIMATORS OF THE DISTRIBUTION FUNCTION WITH PRESCRIBED ACCURACY Abstract. Dvoretzky

More information

LIMITS AND DERIVATIVES CONDITIONS FOR THE EXISTENCE OF A LIMIT

LIMITS AND DERIVATIVES CONDITIONS FOR THE EXISTENCE OF A LIMIT LIMITS AND DERIVATIVES Te limit of a function is defined as te value of y tat te curve approaces, as x approaces a particular value. Te limit of f (x) as x approaces a is written as f (x) approaces, as

More information

UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING

UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING Statistica Sinica 15(2005), 73-98 UNIMODAL KERNEL DENSITY ESTIMATION BY DATA SHARPENING Peter Hall 1 and Kee-Hoon Kang 1,2 1 Australian National University and 2 Hankuk University of Foreign Studies Abstract:

More information

Research Article Error Analysis for a Noisy Lacunary Cubic Spline Interpolation and a Simple Noisy Cubic Spline Quasi Interpolation

Research Article Error Analysis for a Noisy Lacunary Cubic Spline Interpolation and a Simple Noisy Cubic Spline Quasi Interpolation Advances in Numerical Analysis Volume 204, Article ID 35394, 8 pages ttp://dx.doi.org/0.55/204/35394 Researc Article Error Analysis for a Noisy Lacunary Cubic Spline Interpolation and a Simple Noisy Cubic

More information

Continuity. Example 1

Continuity. Example 1 Continuity MATH 1003 Calculus and Linear Algebra (Lecture 13.5) Maoseng Xiong Department of Matematics, HKUST A function f : (a, b) R is continuous at a point c (a, b) if 1. x c f (x) exists, 2. f (c)

More information

Solving Continuous Linear Least-Squares Problems by Iterated Projection

Solving Continuous Linear Least-Squares Problems by Iterated Projection Solving Continuous Linear Least-Squares Problems by Iterated Projection by Ral Juengling Department o Computer Science, Portland State University PO Box 75 Portland, OR 977 USA Email: juenglin@cs.pdx.edu

More information

Average Rate of Change

Average Rate of Change Te Derivative Tis can be tougt of as an attempt to draw a parallel (pysically and metaporically) between a line and a curve, applying te concept of slope to someting tat isn't actually straigt. Te slope

More information

LEAST-SQUARES FINITE ELEMENT APPROXIMATIONS TO SOLUTIONS OF INTERFACE PROBLEMS

LEAST-SQUARES FINITE ELEMENT APPROXIMATIONS TO SOLUTIONS OF INTERFACE PROBLEMS SIAM J. NUMER. ANAL. c 998 Society for Industrial Applied Matematics Vol. 35, No., pp. 393 405, February 998 00 LEAST-SQUARES FINITE ELEMENT APPROXIMATIONS TO SOLUTIONS OF INTERFACE PROBLEMS YANZHAO CAO

More information

Physically Based Modeling: Principles and Practice Implicit Methods for Differential Equations

Physically Based Modeling: Principles and Practice Implicit Methods for Differential Equations Pysically Based Modeling: Principles and Practice Implicit Metods for Differential Equations David Baraff Robotics Institute Carnegie Mellon University Please note: Tis document is 997 by David Baraff

More information

MA455 Manifolds Solutions 1 May 2008

MA455 Manifolds Solutions 1 May 2008 MA455 Manifolds Solutions 1 May 2008 1. (i) Given real numbers a < b, find a diffeomorpism (a, b) R. Solution: For example first map (a, b) to (0, π/2) and ten map (0, π/2) diffeomorpically to R using

More information

Generic maximum nullity of a graph

Generic maximum nullity of a graph Generic maximum nullity of a grap Leslie Hogben Bryan Sader Marc 5, 2008 Abstract For a grap G of order n, te maximum nullity of G is defined to be te largest possible nullity over all real symmetric n

More information

1 Lecture 13: The derivative as a function.

1 Lecture 13: The derivative as a function. 1 Lecture 13: Te erivative as a function. 1.1 Outline Definition of te erivative as a function. efinitions of ifferentiability. Power rule, erivative te exponential function Derivative of a sum an a multiple

More information

Robotic manipulation project

Robotic manipulation project Robotic manipulation project Bin Nguyen December 5, 2006 Abstract Tis is te draft report for Robotic Manipulation s class project. Te cosen project aims to understand and implement Kevin Egan s non-convex

More information

Lecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines

Lecture 15. Interpolation II. 2 Piecewise polynomial interpolation Hermite splines Lecture 5 Interpolation II Introduction In te previous lecture we focused primarily on polynomial interpolation of a set of n points. A difficulty we observed is tat wen n is large, our polynomial as to

More information

CDF and Survival Function Estimation with Infinite-Order Kernels

CDF and Survival Function Estimation with Infinite-Order Kernels CDF and Survival Function Estimation wit Infinite-Order Kernels Artur Berg and Dimitris N. Politis Abstract An improved nonparametric estimator of te cumulative distribution function CDF) and te survival

More information

Math 312 Lecture Notes Modeling

Math 312 Lecture Notes Modeling Mat 3 Lecture Notes Modeling Warren Weckesser Department of Matematics Colgate University 5 7 January 006 Classifying Matematical Models An Example We consider te following scenario. During a storm, a

More information

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations

Parameter Fitted Scheme for Singularly Perturbed Delay Differential Equations International Journal of Applied Science and Engineering 2013. 11, 4: 361-373 Parameter Fitted Sceme for Singularly Perturbed Delay Differential Equations Awoke Andargiea* and Y. N. Reddyb a b Department

More information

Some Review Problems for First Midterm Mathematics 1300, Calculus 1

Some Review Problems for First Midterm Mathematics 1300, Calculus 1 Some Review Problems for First Midterm Matematics 00, Calculus. Consider te trigonometric function f(t) wose grap is sown below. Write down a possible formula for f(t). Tis function appears to be an odd,

More information