Estimation of the quantile function using Bernstein-Durrmeyer polynomials


1 Estimation of the quantile function using Bernstein-Durrmeyer polynomials
Andrey Pepelyshev, Ansgar Steland, Ewaryst Rafajłowicz
The work is supported by a grant of the German Federal Ministry of the Environment, Nature and Nuclear Safety.
Hamburg, February 20

2 Contents
Bernstein-Durrmeyer operator
Properties of the BD estimator of the distribution function
Properties of the BD estimator of the quantile function
The adaptive choice of N
Application in photovoltaics

3 Bernstein-Durrmeyer operator
The BD operator $D_N(u(x))$ has the kernel form
$$D_N(u(x)) = \int_0^1 K_N(x,z)\,u(z)\,dz, \qquad K_N(x,z) = (N+1)\sum_{i=0}^{N} B_i^{(N)}(x)\,B_i^{(N)}(z),$$
or
$$D_N(u(x)) = (N+1)\sum_{i=0}^{N} a_i B_i^{(N)}(x), \qquad a_i = \int_0^1 u(x)\,B_i^{(N)}(x)\,dx,$$
where
$$B_i^{(N)}(x) = \frac{N!}{i!\,(N-i)!}\,x^i (1-x)^{N-i}.$$
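
As a quick numerical illustration of these formulas, here is a minimal sketch in Python (assuming NumPy and SciPy; the names bernstein_basis and bd_operator are illustrative, not from the talk) that evaluates $D_N(u)$ for a smooth test function via the coefficient form $a_i = \int_0^1 u(x)B_i^{(N)}(x)\,dx$.

    # Sketch of the Bernstein-Durrmeyer operator D_N applied to a function u on [0, 1].
    import numpy as np
    from scipy.stats import binom
    from scipy.integrate import quad

    def bernstein_basis(i, N, x):
        # B_i^(N)(x) = C(N, i) x^i (1 - x)^(N - i); binom.pmf gives exactly this.
        return binom.pmf(i, N, x)

    def bd_operator(u, N):
        # a_i = int_0^1 u(x) B_i^(N)(x) dx, then D_N(u)(x) = (N + 1) * sum_i a_i B_i^(N)(x).
        a = np.array([quad(lambda x: u(x) * bernstein_basis(i, N, x), 0.0, 1.0)[0]
                      for i in range(N + 1)])
        def Du(x):
            return (N + 1) * sum(a[i] * bernstein_basis(i, N, x) for i in range(N + 1))
        return Du

    if __name__ == "__main__":
        u = np.cos                    # a smooth test function on [0, 1]
        Du = bd_operator(u, N=50)
        for x in (0.1, 0.5, 0.9):
            print(x, u(x), Du(x))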

4 BD operator for deterministic functions
Derriennic (1981) showed that $D_N(u)$ converges uniformly,
$$\sup_{x\in[0,1]} |D_N(u(x)) - u(x)| \le 2\,\omega_u(N^{-1/2}),$$
where $\omega_u$ denotes the modulus of continuity of $u$.
Chen and Ditzian (1991) established that
$$\varepsilon(u) = \|u(x) - D_N(u(x))\|_p = O(N^{-\delta/2})$$
if and only if
$$\inf_{a_0,\dots,a_N} \|u(x) - a_0 - a_1 x - \dots - a_N x^N\|_p = O(N^{-\delta})$$
as $N\to\infty$ for some $\delta\in(0,2)$ and $p\in\{2,\infty\}$, where $\|f\|_p = \big(\int_I |f(x)|^p\,dx\big)^{1/p}$.

5 BD operator for the distribution function
Let $F_m(x)$ be the empirical d.f. for a sample $X_1,\dots,X_m \sim F$ on $[0,1]$ with density $f$, and
$$f_{m,N}(x) = \frac{1}{m}\sum_{j=1}^{m} K_N(X_j, x), \qquad \hat F_{m,N}(x) := \int_0^x f_{m,N}(t)\,dt.$$
Theorem (Ciesielski, 1988). If $F \in C[0,1]$, then
$$\Pr\big(\|\hat F_{m,N} - F\|_\infty \to 0 \text{ as } m, N \to \infty\big) = 1.$$
If $f \in L_p[0,1]$ and $m = N^{\beta}$ for some $\beta \in (0, 0.5)$, then
$$\Pr\big(\|f_{m,N} - f\|_p = o(1) \text{ as } N \to \infty\big) = 1.$$
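
A corresponding sketch of the kernel-form estimators $f_{m,N}$ and $\hat F_{m,N}$ for a sample supported on $[0,1]$ (again assuming NumPy/SciPy; the helper names and the trapezoidal integration of $f_{m,N}$ are illustrative choices, not the authors' code):

    # Kernel-form BD estimators of the density and distribution function (slide 5).
    import numpy as np
    from scipy.stats import binom

    def K_N(x, z, N):
        # K_N(x, z) = (N + 1) * sum_i B_i^(N)(x) B_i^(N)(z)
        i = np.arange(N + 1)
        return (N + 1) * np.sum(binom.pmf(i, N, x) * binom.pmf(i, N, z))

    def density_estimate(sample, x, N):
        # f_{m,N}(x) = (1/m) sum_j K_N(X_j, x)
        return np.mean([K_N(xj, x, N) for xj in sample])

    def df_estimate(sample, x, N, grid_size=200):
        # \hat F_{m,N}(x) = int_0^x f_{m,N}(t) dt, here via the trapezoidal rule.
        t = np.linspace(0.0, x, grid_size)
        f = [density_estimate(sample, ti, N) for ti in t]
        return np.trapz(f, t)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        sample = rng.beta(2.0, 3.0, size=200)
        print(df_estimate(sample, 0.5, N=20))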

6 BD operator for the quantile function
Let $Q_m(x)$ be the empirical q.f. for a sample $X_1,\dots,X_m$.
Version 1:
$$\tilde Q_{m,N}(x) := D_N(Q_m(x)) = (N+1)\sum_{i=0}^{N} \tilde a_i B_i^{(N)}(x), \qquad \tilde a_i = \int_0^1 Q_m(x)\,B_i^{(N)}(x)\,dx.$$
Version 2:
$$\hat Q_{m,N}(x) = (N+1)\sum_{i=0}^{N} \hat a_i B_i^{(N)}(x), \qquad \hat a_i = \frac{1}{m}\sum_{j=1}^{m} X_{(j)}\,B_i^{(N)}\Big(\frac{j-1}{m-1}\Big).$$
Note that $\hat a_i$ are asymptotically equivalent to $\tilde a_i$ in mean square (Yang, 1985).
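
Both versions translate directly into code. The sketch below (illustrative names, SciPy's binom.pmf used for the Bernstein basis, grid points $(j-1)/(m-1)$ taken from the slide) computes $\tilde Q_{m,N}(x)$ and $\hat Q_{m,N}(x)$ for a given sample.

    # Two BD quantile estimators from slide 6 (names are illustrative).
    import numpy as np
    from scipy.stats import binom
    from scipy.integrate import quad

    def bd_quantile_v1(sample, x, N):
        # Version 1: integral weights a~_i = int_0^1 Q_m(t) B_i^(N)(t) dt,
        # where Q_m is the empirical quantile function (a step function).
        xs = np.sort(sample)
        m = len(xs)
        def Q_m(t):
            j = min(int(np.ceil(t * m)), m)     # Q_m(t) = X_(j) for t in ((j-1)/m, j/m]
            return xs[max(j, 1) - 1]
        a = [quad(lambda t: Q_m(t) * binom.pmf(i, N, t), 0.0, 1.0, limit=200)[0]
             for i in range(N + 1)]
        return (N + 1) * sum(a[i] * binom.pmf(i, N, x) for i in range(N + 1))

    def bd_quantile_v2(sample, x, N):
        # Version 2: discrete weights a^_i = (1/m) sum_j X_(j) B_i^(N)((j-1)/(m-1)).
        xs = np.sort(sample)
        m = len(xs)
        grid = (np.arange(1, m + 1) - 1) / (m - 1)
        a = [np.mean(xs * binom.pmf(i, N, grid)) for i in range(N + 1)]
        return (N + 1) * sum(a[i] * binom.pmf(i, N, x) for i in range(N + 1))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        sample = rng.uniform(size=100)
        for q in (0.1, 0.5, 0.9):
            print(q, bd_quantile_v1(sample, q, N=15), bd_quantile_v2(sample, q, N=15))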

7 Properties of the BD estimator
Lemma. For $\tilde a_i$ it holds that
$$\mathrm{Var}(\tilde a_i) \le \frac{1}{(N+1)^2}\max_{j=1,\dots,m}\mathrm{Var}(X_{(j)}).$$
Proof. Set
$$w_j = \frac{\int_{(j-1)/m}^{j/m} B_i^{(N)}(t)\,dt}{\sum_{l=1}^{m}\int_{(l-1)/m}^{l/m} B_i^{(N)}(t)\,dt}, \qquad j = 1,\dots,m.$$
Then we have that
$$\mathrm{Var}(\tilde a_i) = \Big(\sum_{l=1}^{m}\int_{(l-1)/m}^{l/m} B_i^{(N)}(t)\,dt\Big)^2\, \mathrm{E}\Big(\sum_{j=1}^{m}(X_{(j)} - \mathrm{E}X_{(j)})\,w_j\Big)^2.$$
Apply the Jensen inequality $\big(\mathrm{E}_\zeta(a_\zeta - \mathrm{E}a_\zeta)\big)^2 \le \mathrm{E}_\zeta(a_\zeta - \mathrm{E}a_\zeta)^2$, where $\zeta$ is a random variable such that $P(\zeta = j) = w_j$, and note that $\sum_{l=1}^{m}\int_{(l-1)/m}^{l/m} B_i^{(N)}(t)\,dt = \int_0^1 B_i^{(N)}(t)\,dt = 1/(N+1)$.

8 Properties of the BD estimator
Lemma. For the BD estimator it holds that
$$\mathrm{Var}(\tilde Q_{m,N}(x)) \le C_m := \max_{j=1,\dots,m}\mathrm{Var}(X_{(j)}) \quad\text{and}\quad \int_0^1 \mathrm{Var}(\tilde Q_{m,N}(x))\,dx \le C_m.$$
If the derivative of $Q(x)$ is continuous, then $C_m = O(1/m)$.
Proof. If $X_1,\dots,X_m \sim U(0,1)$, then $X_{(j)} \sim \beta(j, m-j+1)$. Note that the variance of the beta distribution $\beta(a,b)$ is $\frac{ab}{(a+b)^2(a+b+1)}$. Therefore, it is easy to see that $C_m = O(1/m)$.

9 Maximum variance of order statistics
Since $Q'(x)$ is continuous, we can write $X_{(j)} = Q(U_{(j)})$. Applying the Lagrange-Taylor formula to $Q(U_{(j)})$ at the point $\mathrm{E}U_{(j)} = j/(m+1) = p_j$, we obtain
$$X_{(j)} = Q(p_j) + (U_{(j)} - p_j)\,Q'(\zeta_j)$$
for some $\zeta_j \in [\min\{U_{(j)}, p_j\}, \max\{U_{(j)}, p_j\}]$. Then we have
$$\mathrm{Var}(X_{(j)}) = \mathrm{Var}\big((U_{(j)} - p_j)Q'(\zeta_j)\big) \le \max_x Q'^2(x)\,\mathrm{E}(U_{(j)} - p_j)^2 = \frac{p_j(1-p_j)}{m+2}\max_x Q'^2(x).$$
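
The bound is easy to check by simulation. The sketch below (with an illustrative quantile function having a bounded derivative, not from the talk) compares the Monte Carlo variance of $X_{(j)} = Q(U_{(j)})$ with $\max_x Q'^2(x)\,p_j(1-p_j)/(m+2)$; the reported ratio should stay at or below 1 up to Monte Carlo error.

    # Monte Carlo check of the order-statistic variance bound from slide 9.
    import numpy as np

    Q = lambda u: u + 0.3 * np.sin(np.pi * u)          # illustrative quantile function on [0, 1]
    Qprime_max = 1.0 + 0.3 * np.pi                     # max of Q'(u) = 1 + 0.3*pi*cos(pi*u)

    m, reps = 50, 20000
    rng = np.random.default_rng(2)
    U = np.sort(rng.uniform(size=(reps, m)), axis=1)   # uniform order statistics
    X = Q(U)                                           # X_(j) = Q(U_(j))

    j = np.arange(1, m + 1)
    p = j / (m + 1)
    empirical = X.var(axis=0)
    bound = Qprime_max**2 * p * (1 - p) / (m + 2)
    # Should be at most about 1, up to Monte Carlo fluctuation.
    print("max ratio empirical/bound:", float(np.max(empirical / bound)))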

10 MSE and MISE consistency
Let $X_1,\dots,X_m$ be $m$ random variables with common quantile function $Q(x)$ with a continuous derivative on $[a,b] \subset [0,1]$ that satisfies the Chen-Ditzian condition with $p = \infty$. Then the Bernstein-Durrmeyer estimator with integral weights, $\tilde Q_{m,N}(x)$, satisfies
$$\max_{x\in[a,b]} \mathrm{E}\big(\tilde Q_{m,N}(x) - Q(x)\big)^2 = O(1/m + N^{-\delta})$$
and
$$\mathrm{E}\int_a^b \big(\tilde Q_{m,N}(x) - Q(x)\big)^2\,dx = O(1/m + N^{-\delta})$$
as $m \to \infty$ and $N \to \infty$.

11 The BD estimator with correction term
Let $\tilde e_l(x) = D_l\big(Q_m(x) - \tilde Q_{m,N}(x)\big)$. Define the error-corrected Bernstein-Durrmeyer estimator
$$\tilde Q_{m,N,l}(x) = \tilde Q_{m,N}(x) + \tilde e_l(x).$$
Theorem. Suppose $X_1, X_2, \dots$ are i.i.d. random variables with common quantile function $Q(x)$ and distribution function $F(x)$ satisfying $\int |x|^p\,dF(x) < \infty$ for some $1 \le p \le \infty$. Then the error-corrected estimator $\tilde Q_{m,N,l}(x)$ is consistent in the $p$th mean for the true quantile function $Q(x)$,
$$\|\tilde Q_{m,N,l} - Q\|_p \to 0 \quad \text{as } m \to \infty \text{ and } N, l \to \infty,$$
with probability 1, and in the mean, i.e.
$$\mathrm{E}\|\tilde Q_{m,N,l} - Q\|_p \to 0 \quad \text{as } m \to \infty \text{ and } N, l \to \infty.$$

12 Adaptive choice of N
We modify an algorithm from (Golyandina, Pepelyshev, Steland, 2012) whose idea is based on balancing the closeness of the BD estimator to the empirical q.f., in terms of the law of the iterated logarithm, against the stability of the number of modes of the corresponding density estimator.
Note that the Bernstein polynomial $B_i^{(N)}(x)$ can be approximated by the density of the normal distribution $\varphi((i/N - x)/s)$, where $s = \sqrt{x(1-x)/N}$ and $\varphi(x) = (2\pi)^{-0.5} e^{-x^2/2}$. Thus, we can introduce the quantity $h = 1/\sqrt{N}$, which plays the role of the bandwidth.
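
The normal approximation and the bandwidth analogy can be seen numerically. In the sketch below (parameter values are arbitrary), the Binomial$(N,x)$ pmf $B_i^{(N)}(x)$ is compared with the matched normal density; the factor $1/(Ns)$ is the usual pmf-to-density normalization, left implicit on the slide.

    # Local-CLT illustration behind slide 12: B_i^(N)(x) is close to a normal density
    # with scale s = sqrt(x(1-x)/N), which motivates the bandwidth h = 1/sqrt(N).
    import numpy as np
    from scipy.stats import binom, norm

    N, x = 400, 0.3
    s = np.sqrt(x * (1 - x) / N)
    i = np.arange(N + 1)
    bernstein = binom.pmf(i, N, x)                       # B_i^(N)(x)
    gaussian = norm.pdf((i / N - x) / s) / (N * s)       # matched normal approximation

    print("h = 1/sqrt(N) =", 1 / np.sqrt(N))
    print("max abs difference:", float(np.max(np.abs(bernstein - gaussian))))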

13 Algorithm of adaptive choice of N
1] Compute
$$\bar h = \max\Big\{ h \in (0,1] : \max_{t\in(0,h)} \big|F_m\big(\tilde Q_{m,\lceil 1/t^2\rceil}(q)\big) - q\big| \le 1/R_m \Big\},$$
where $F_m(x)$ is the empirical distribution function, $\lceil z\rceil$ is the smallest integer that is larger than or equal to $z$, and $R_m = 2\sqrt{m}/\sqrt{2\log\log m}$.

14 Algorithm of adaptive choice of N
2] Define a set $\{h_1,\dots,h_n\} \subset (0, \bar h)$ such that $\max_{h\in(0,\bar h)} \min_{j=1,\dots,n} |h - h_j|$ is small, and compute the sequence $M_1,\dots,M_n$, where $M_j$ is the number of local minima of the Bernstein-Durrmeyer estimator $\tilde Q_{m,N}(x)$ with $N = \lceil 1/h_j^2\rceil$.
3] Compute $\check M_j = \min\{M_1, M_2,\dots,M_j\}$, $j = 1,\dots,n$.

15 Algorithm of adaptive choice of N
4] Divide the set $\{h_1,\dots,h_n\}$ into groups as follows. Define $a_i$ and $b_i$ such that $a_1 \le b_1 < a_2 \le b_2 < \dots < a_k \le b_k$ and $\check M_i = \check M_j$ for all $h_i, h_j \in [a_l, b_l]$, $l \in \{1,\dots,k\}$.
5] Compute $\hat h = \sum_{i=1}^{k} a_i w_i$, where $w_i = (b_i - a_i)/\sum_{j=1}^{k}(b_j - a_j)$, and then set $\hat N = \lceil 1/\hat h^2\rceil$. (A code sketch of steps 2-5 is given below.)
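
A rough sketch of steps 2-5 (assuming NumPy). The upper bound $\bar h$ from step 1 and the routine count_local_minima, which should return the number of local minima of the BD estimator of degree $N$, are taken as given placeholders; this is a reconstruction of the listed steps, not the authors' implementation.

    # Reconstruction of steps 2-5 of the adaptive choice of N (slides 13-15).
    import numpy as np

    def adaptive_degree(h_bar, count_local_minima, n_grid=50):
        # Step 2: a grid {h_1, ..., h_n} inside (0, h_bar) and the counts M_j for
        # the BD estimator of degree N = ceil(1/h_j^2).
        h = h_bar * np.arange(1, n_grid + 1) / (n_grid + 1)
        M = np.array([count_local_minima(int(np.ceil(1.0 / hj**2))) for hj in h])
        # Step 3: running minima M_check_j = min{M_1, ..., M_j}.
        M_check = np.minimum.accumulate(M)
        # Step 4: group consecutive h_j sharing the same value of M_check into [a_l, b_l].
        groups, start = [], 0
        for j in range(1, n_grid + 1):
            if j == n_grid or M_check[j] != M_check[start]:
                groups.append((h[start], h[j - 1]))
                start = j
        a = np.array([g[0] for g in groups])
        b = np.array([g[1] for g in groups])
        # Step 5: h_hat is a weighted combination of left endpoints, weights ~ group lengths.
        lengths = b - a
        w = lengths / lengths.sum() if lengths.sum() > 0 else np.full(len(a), 1.0 / len(a))
        h_hat = float(np.sum(a * w))
        return int(np.ceil(1.0 / h_hat**2))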

16 Consistency of the BD estimator with $\hat N$
Let $X_1,\dots,X_m$ be $m$ random variables with a continuously differentiable quantile function $Q$, and let $R_m = 2\sqrt{m}/\sqrt{2\log\log m}$. Then the adaptive Bernstein-Durrmeyer estimator $\tilde Q_{m,\hat N}(q) = D_{\hat N}(Q_m(q))$, where $\hat N$ is selected by the above algorithm, is consistent as $m \to \infty$. Moreover, we have the a.s. uniform error bound
$$\sup_{-\infty<x<\infty} \big|\tilde Q^{-1}_{m,\hat N}(x) - F(x)\big| \le \sqrt{2\log\log m}/\sqrt{m}$$
for the estimator $\tilde Q^{-1}_{m,\hat N}(x)$ of the distribution function $F(x)$, and the a.s. uniform error bound
$$\sup_{q} \big|\tilde Q_{m,\hat N}(q) - Q(q)\big| \le 2\sqrt{2\log\log m}/\sqrt{m}$$
for the quantile estimator $\tilde Q_{m,\hat N}(x)$.

17 Invariance principle
Theorem. If $R_m$ is chosen such that $R_m^{-1} = o(m^{-1/2})$, then for the Bernstein-Durrmeyer polynomial estimator with data-adaptive selection of the degree $N$ we have
$$\big\{ \sqrt{m}\,[\tilde Q^{-1}_{m,\hat N}(x) - F(x)] : -\infty < x < \infty \big\} \Rightarrow \big\{ B_0(F(x)) : -\infty < x < \infty \big\},$$
as $m \to \infty$, in the Skorohod space $D(\mathbb{R}; \mathbb{R})$, where $B_0(t) = W_t - tW_1$, $t \in [0,1]$, is a Brownian bridge process.

18 Application in photovoltaics
Parameters of acceptance sampling for the quality control:
AQL, the acceptable quality level
RQL, the rejectable quality level
α, the producer risk
β, the consumer risk
The sampling plan:
n, the number of items to be checked from a lot
c, the critical value for the T-statistic

19 The meaning of parameters
The lot is accepted if and only if $T_n > c$. The operational characteristic
$$OC_{n,c}(p) = P(T_n > c)$$
is the probability of lot acceptance for a given $p$, the true proportion of non-conforming items.

20 Requirements for the sampling plan
The sampling plan is a minimal solution satisfying the following conditions:
$$OC_{n,c}(p) \ge 1 - \alpha \quad \text{for } p \le \mathrm{AQL}, \qquad OC_{n,c}(p) \le \beta \quad \text{for } p \ge \mathrm{RQL}.$$

21 Sampling plans for quality control
If measurements are distributed according to a distribution $G(x)$ with mean $a$ and variance $\sigma^2$, then
$$OC_{n,c}(p) = 1 - \Phi\big(c + \sqrt{n}\,G^{-1}(1-p)\big).$$
The asymptotically optimal sampling plan $(n, c)$ is
$$n = \frac{\big(\Phi^{-1}(\alpha) - \Phi^{-1}(1-\beta)\big)^2}{\big(F^{-1}(\mathrm{AQL}) - F^{-1}(\mathrm{RQL})\big)^2}, \qquad c = \frac{\sqrt{n}}{2}\big(F^{-1}(\mathrm{AQL}) + F^{-1}(\mathrm{RQL})\big),$$
where $F(x) = G((x-a)/\sigma)$ and $\Phi(x)$ is the standard normal distribution function.
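
A direct transcription of these formulas into code (assuming SciPy's norm.ppf for $\Phi^{-1}$ and a user-supplied quantile estimate $F^{-1}$, e.g. the BD quantile estimator above). The sign convention for $c$ depends on how the $T$-statistic is standardized, so the sketch should be read with that caveat.

    # Sketch of the asymptotically optimal sampling plan (n, c) from slide 21.
    import numpy as np
    from scipy.stats import norm

    def sampling_plan(F_inv, AQL, RQL, alpha, beta):
        # n and c as displayed on the slide; F_inv is the (estimated) quantile function.
        num = (norm.ppf(alpha) - norm.ppf(1 - beta)) ** 2
        den = (F_inv(AQL) - F_inv(RQL)) ** 2
        n = int(np.ceil(num / den))
        c = np.sqrt(n) * (F_inv(AQL) + F_inv(RQL)) / 2
        return n, c

    if __name__ == "__main__":
        # Parameters of slide 22 with a standard normal F (purely illustrative).
        n, c = sampling_plan(norm.ppf, AQL=0.02, RQL=0.05, alpha=0.05, beta=0.05)
        print(n, c)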

22 Numerical results
Characteristics of estimation of the sampling plan $(n, c)$ for two quantile estimators, $\alpha = \beta = 5\%$, AQL = 2% and RQL = 5%.
[Table: for each sample size $m$, the mean and standard deviation of $n$ and of $c$ for the Bernstein-Durrmeyer estimator and for the empirical quantile estimator.]

23 Thank you for your attention!
