Statistical inference on Lévy processes


1 Alberto Coca Cabrero, University of Cambridge - CCA. Supervisors: Dr. Richard Nickl and Professor L.C.G. Rogers. Funded by Fundación Mutua Madrileña and EPSRC. MASDOC/CCA student workshop, March.

2 Outline: Motivation and preliminaries (Motivation, Non-parametric statistics, Lévy processes); Main result; Extensions.

3 Motivation. Lévy processes form the fundamental building block for stochastic continuous-time models with jumps, and they are highly useful in many applications. However, in these applications the data correspond to discrete-time observations of a stochastic process. Recall that an estimator is any measurable function of the data, and it is consistent if it converges in probability to the true value.

4 Motivation. Therefore, finding consistent estimators of the underlying continuous-time process, and of functionals of it, is of vital importance for these applications. It is also important to understand how fast these estimates approach the true values as the number of observations grows. These theoretical questions proved to be technically challenging, which, together with the current growing demand for this type of result, explains why there were hardly any important results in the area before the last decade.

5 Example 1: regression. Suppose we are given $n$ pairs of observations $(X_1, Y_1), \dots, (X_n, Y_n)$ and the response variable $Y$ is related to the covariate $X$ by the functional relationship $Y_i = m(X_i) + \varepsilon_i$, $i = 1, \dots, n$, where $m : \mathbb{R} \to \mathbb{R}$ is some unknown function and the $\varepsilon_i$ are i.i.d. errors independent of the $X_i$.


7 Example 1: regression. Simple linear regression: assume $m(X_i) = a + bX_i$ for some unknown $a, b \in \mathbb{R}$. The most famous unbiased and consistent estimator of the parameters is ordinary least squares. Non-parametric regression: as few assumptions on $m$ as possible. In general, $m$ is assumed to live in a metric functional space $(\Sigma, d)$, e.g. $(\Sigma, d) = L^2$. Therefore, many estimators are constructed using bases of these spaces, such as wavelets, splines or polynomials. Kernel-based estimators are also important, e.g. the Nadaraya-Watson estimator sketched below.
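
To make the kernel-regression idea concrete, here is a minimal sketch of a Nadaraya-Watson estimator; the Gaussian kernel, bandwidth value and toy data are illustrative choices, not part of the slides.

```python
import numpy as np

def nadaraya_watson(x0, X, Y, h):
    """Nadaraya-Watson estimate of m(x0) = E[Y | X = x0] with a Gaussian
    kernel: a locally weighted average of the responses Y_i."""
    w = np.exp(-0.5 * ((x0 - X) / h) ** 2)   # kernel weights K((x0 - X_i)/h)
    return np.sum(w * Y) / np.sum(w)

# Toy data: Y_i = m(X_i) + eps_i with m(x) = sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, 200)
Y = np.sin(X) + 0.3 * rng.standard_normal(200)
print(nadaraya_watson(np.pi / 2, X, Y, h=0.3))   # should be close to m(pi/2) = 1
```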

8 Example 2: distribution function. Let $X_1, \dots, X_n$ be i.i.d. real random variables with distribution function $F$. Then $F$ belongs to the functional space $\Sigma = \{G : \mathbb{R} \to [0,1] : \lim_{x \to -\infty} G(x) = 0,\ \lim_{x \to \infty} G(x) = 1,\ G \text{ increasing and right continuous}\}$ with the $L^\infty$ metric. The natural estimator of $F$ is the empirical distribution function $F_n(x) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{(-\infty, x]}(X_i)$, $x \in \mathbb{R}$.
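
A minimal sketch of the empirical distribution function evaluated on a grid; the standard normal sample and the evaluation points are illustrative assumptions.

```python
import numpy as np

def ecdf(x_grid, X):
    """Empirical distribution function F_n(x) = (1/n) * #{i : X_i <= x}."""
    X = np.sort(np.asarray(X))
    # searchsorted with side="right" counts how many observations fall in (-inf, x].
    return np.searchsorted(X, x_grid, side="right") / len(X)

X = np.random.default_rng(1).standard_normal(1000)
print(ecdf(np.array([-1.0, 0.0, 1.0]), X))   # roughly [0.16, 0.5, 0.84]
```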


10 Example 3: density function. Let $X_1, \dots, X_n$ be i.i.d. real random variables with density function $f$. Then $f$ belongs to the functional space $\Sigma = \{g : \mathbb{R} \to [0, \infty) : \int g(x)\,dx = 1\}$ with the $L^1$ norm. Here there is no natural estimator of $f$, but a typical one is the kernel density estimator $f_n(x, h) = \big(\tfrac{1}{h}K(\cdot/h) * P_n\big)(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h}\right)$, where $K$ is a non-negative kernel that integrates to one, $h$ is a bandwidth and $P_n = \frac{1}{n}\sum_{i=1}^{n} \delta_{X_i}$ is the empirical measure.

11 Figure: grey: true density (standard normal); green: KDE with $h = 2$; black: $h = 0.337$; red: $h = 0.05$.
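
A minimal sketch reproducing the kind of experiment behind the figure: a Gaussian-kernel density estimate of a standard normal sample at the three bandwidths quoted in the caption (sample size and grid are assumptions).

```python
import numpy as np

def kde(x_grid, X, h):
    """Gaussian-kernel density estimator f_n(x, h) = (1/(n*h)) * sum_i K((x - X_i)/h)."""
    u = (x_grid[:, None] - X[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return K.mean(axis=1) / h

X = np.random.default_rng(2).standard_normal(500)
grid = np.linspace(-4, 4, 201)
for h in (2.0, 0.337, 0.05):      # over-smoothed, reasonable, under-smoothed
    fh = kde(grid, X, h)
    print(h, round(float(fh[100]), 3))   # estimate at x = 0; true density there is ~0.399
```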

12 Minimax rate of estimation. Let $f$ be a function in a normed functional space $(\Sigma, d)$. Question: what is the best rate, in terms of risk under $d$-loss, at which $f$ can be estimated by an estimator $f_n$? Definition. Under the setting above, the minimax rate of the problem is the sequence $r_n$ of positive real numbers such that $\inf_{f_n} \sup_{f \in \Sigma} r_n^{-1} E_f[d(f, f_n)]$ is bounded away from zero and infinity as $n \to \infty$.

13 Minimax rate of estimation. Vaguely speaking, the minimax rate for a metric space $(\Sigma, d)$ is the best rate at which an unknown function $f$ in $\Sigma$ can be estimated. It depends both on the complexity of $\Sigma$ and on the metric $d$. Examples: Distribution function: the minimax rate in distribution function estimation is $r_n = 1/\sqrt{n}$ and, by Donsker's theorem (more details later), this rate is achieved by the empirical distribution function (a quick numerical illustration follows). Density function: denote by $C^s$ any Hölder ball of regularity $s > 0$ and let $f \in C^s = \Sigma$, equipped with the $L^\infty$ norm. Then it is well known that, up to logarithmic factors, the minimax rate is $r_n = n^{-s/(2s+1)} > 1/\sqrt{n}$.
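
A quick Monte Carlo illustration of the $1/\sqrt{n}$ rate for the empirical distribution function; uniform data are used purely as an assumption for simplicity, so that $F(x) = x$.

```python
import numpy as np

# For U(0,1) data, sqrt(n) * sup_x |F_n(x) - x| stays of constant order as n grows
# (its limit is the supremum of a Brownian bridge), illustrating the 1/sqrt(n) rate.
rng = np.random.default_rng(5)
for n in (100, 1000, 10000):
    U = np.sort(rng.uniform(size=n))
    i = np.arange(1, n + 1)
    # The supremum of |F_n(x) - x| is attained at the order statistics.
    ks = np.max(np.maximum(i / n - U, U - (i - 1) / n))
    print(n, round(np.sqrt(n) * ks, 3))
```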

14 Minimax rate of estimation. In parametric models, where $\Sigma \subseteq \mathbb{R}^k$ for some $k \in \mathbb{N}$ and $d$ may be the Euclidean norm, the minimax rate is always $r_n = 1/\sqrt{n}$. A maximum-likelihood-type estimator is usually available and it achieves the $1/\sqrt{n}$ rate of convergence in view of the central limit theorem. In non-parametric models $r_n$ is in general larger than $1/\sqrt{n}$ (i.e. the best rate of convergence is slower than in the parametric case) and it equals $1/\sqrt{n}$ only in some cases. In these particular cases we often also have distributional limits, as in the parametric case.

15 Minimax rate of estimation. Therefore, we can divide statistical problems into two types: 1. those in which $(\Sigma, d)$ admits $\sqrt{n}$-consistent estimators ("statistically finite-dimensional models"); 2. and those in which $(\Sigma, d)$ does not. Type 1 includes interesting probabilistic problems. The statistical interpretation of the results often proceeds along the lines of the classical parametric theory, and statistical inference (tests, confidence sets, etc.) can usually be carried out without serious difficulty. Type 2 is statistically more involved. The rate of estimation typically depends on specific geometric/analytic properties of $\Sigma$, and honest statistical inference is a highly nontrivial problem.

16 Definition. A Lévy process $L = (L_t)_{t \geq 0}$ is a continuous-time stochastic process satisfying the following properties: 1. $L_0 = 0$ a.s. 2. For any $0 \leq t_0 \leq t_1 \leq \dots \leq t_n$, the increments $L_{t_0}, L_{t_1} - L_{t_0}, \dots, L_{t_n} - L_{t_{n-1}}$ are independent. 3. For any $0 \leq s \leq t$, $L_t - L_s$ is equal in distribution to $L_{t-s}$. 4. The sample paths of $L$ are a.s. right continuous with left limits.

17 Characteristic function. A Lévy process is fully characterised by its characteristic function $\varphi_t(u) := E[\exp(iuL_t)]$. It satisfies $\varphi_t(u) = \exp(t\psi(u))$, where $\psi(u) := i\gamma u - \frac{\sigma^2}{2}u^2 + \int_{\mathbb{R}} \big(e^{iux} - 1 - iux\,\mathbf{1}_{|x|<1}\big)\,\nu(dx)$, $u \in \mathbb{R}$. The function $\psi$ is determined by the Lévy triplet $(\gamma, \sigma, \nu)$, which comprises: a drift-like part $\gamma \in \mathbb{R}$; a volatility parameter $\sigma \geq 0$; and the so-called Lévy measure $\nu$, which is non-negative, $\sigma$-finite and satisfies $\int_{\mathbb{R}} (1 \wedge x^2)\,\nu(dx) < \infty$.

18 Interpretation of a Lévy process. Informally, any Lévy process $L$ can be decomposed into independent components $L_t = \sigma B_t + \gamma t + J_t$, where $B$ is a Brownian motion and $J$ is a pure jump process whose jumps are determined by the Lévy measure $\nu$. A simulation sketch based on this decomposition follows.
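
As an illustration of the decomposition, here is a minimal sketch simulating a finite-activity Lévy process (Brownian part plus drift plus compound Poisson jumps); the jump intensity and the normal jump-size distribution are illustrative assumptions.

```python
import numpy as np

def simulate_levy_path(T, n_steps, sigma, gamma, jump_rate, rng=None):
    """Simulate L_t = sigma*B_t + gamma*t + J_t on [0, T], where J is a compound
    Poisson process with N(0, 1) jump sizes (so nu is a finite measure here)."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    # Brownian + drift increments.
    dW = sigma * np.sqrt(dt) * rng.standard_normal(n_steps) + gamma * dt
    # Compound Poisson increments: Poisson(jump_rate*dt) jumps per time step.
    n_jumps = rng.poisson(jump_rate * dt, size=n_steps)
    dJ = np.array([rng.standard_normal(k).sum() for k in n_jumps])
    L = np.concatenate([[0.0], np.cumsum(dW + dJ)])
    return t, L

t, L = simulate_levy_path(T=10.0, n_steps=1000, sigma=0.5, gamma=0.1, jump_rate=2.0, rng=3)
print(L[-1])
```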

19 Estimating L through its characteristic function. The Lévy triplet comprises the characteristics that uniquely determine the Lévy process. Therefore we will consider the problem of estimating the Lévy triplet of a Lévy process, or functionals of it, from time-discrete observations of the process. As one may already expect, the challenges arise when dealing with the jump measure: since there is only a moment assumption on it, this is a problem of non-parametric statistics.

20 Assumptions on the observations of L. In the following let us assume that we have $n$ observations $L_{t_i}$ of $L$ at equidistant time steps $t_i = i\Delta$, $i = 0, 1, \dots, n$, for some $\Delta > 0$ fixed. Thus, by the definition of a Lévy process and defining $X_i := L_{t_i} - L_{t_{i-1}}$, $i = 1, \dots, n$, we equivalently observe $n$ independent realisations of a random variable $X$ with characteristic function $\varphi(u) = \exp(\Delta\psi(u))$, where $\psi$ is as before.
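
The estimation strategy below works entirely through the characteristic function of the increments, so here is a minimal sketch of the reduction to i.i.d. increments and of the empirical characteristic function $\varphi_n$; the increment distribution used (Brownian part plus drift plus compound Poisson jumps, matching the toy process above) and all parameter values are assumptions.

```python
import numpy as np

def empirical_cf(u, X):
    """Empirical characteristic function phi_n(u) = (1/n) * sum_k exp(i*u*X_k)."""
    return np.exp(1j * np.outer(np.atleast_1d(u), X)).mean(axis=1)

# Equivalent view of the data: n i.i.d. increments X_i = L_{i*Delta} - L_{(i-1)*Delta},
# drawn here directly from the increment distribution of the toy process above.
rng = np.random.default_rng(4)
n, delta, sigma, gamma, lam = 1000, 0.01, 0.5, 0.1, 2.0
jumps = np.array([rng.standard_normal(k).sum() for k in rng.poisson(lam * delta, n)])
X = sigma * np.sqrt(delta) * rng.standard_normal(n) + gamma * delta + jumps
print(np.abs(empirical_cf([0.0, 1.0, 5.0], X)))   # first entry is exactly 1
```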

21 Neumann and Reiß (2009) constructed a very general consistent estimator of the Lévy characteristics. The estimator concerning the Lévy measure does not estimate the measure itself but functionals of it, which is generally what is used in applications. They also find the minimax rates of the problem for a rich set of distances $d$, depending on the decay of the characteristic function of the Lévy process (this decay is directly related to $\nu$, and hence different decays give different spaces $\Sigma$ in the sense of the previous section). They show that the estimator achieves these optimal rates.

22 Main theorem. Theorem (Neumann and Reiß, 2009). Let $(\hat{\nu}_\sigma)_n$ be the estimator of $\nu_\sigma(dx) = \sigma^2\delta_0(dx) + x^2\nu(dx)$ and let $f$ be such that $\int (1 + |u|)^s\,|\mathcal{F}f(u)|\,du \leq 1$, where $\mathcal{F}$ denotes the Fourier transform. Then, if $r_n$ denotes the rate of convergence of $\int f\,d(\hat{\nu}_\sigma)_n$ to $\int f\,d\nu_\sigma$, it holds that
$r_n = (\log n)^{-s/2}$ if $\exists\,\sigma > 0:\ \mathrm{Re}(\log\varphi(u)) \sim -\sigma^2 u^2/2$;
$r_n = (\log n)^{-s}$ if $\exists\,\alpha > 0:\ \mathrm{Re}(\log\varphi(u)) \sim -\alpha|u|$;
$r_n = n^{-s/(2\beta)} \vee n^{-1/2}$ if $\exists\,C, \beta > 0:\ |\varphi(u)| \geq C(1 + |u|)^{-\beta}$;
and it is optimal for the $(\Sigma, d)$ spaces they consider.

23 Drawbacks of the result. This result is highly relevant, as it informs us of the best rates of convergence we should expect when estimating functionals of the Lévy measure. In particular, we should expect to be in the $\sqrt{n}$ world when the characteristic function has polynomial decay. However, it does not include any distributional limits, hence no interesting statistical inference procedures such as confidence bands or tests can be derived from it; and it does not apply to estimating the cumulative function of the Lévy measure, which is a very natural object to estimate.

24 Definitions. A cumulative-type function of the Lévy measure can be defined as ($t$ is a space variable, not time)
$N(t) = \int_{-\infty}^{t} \nu(dx)$ if $t < 0$, and $N(t) = \int_{t}^{\infty} \nu(dx)$ if $t > 0$.
Note that $N(0)$ is not defined in general. Let us also define $L^\infty_\zeta := L^\infty\big((-\infty, -\zeta] \cup [\zeta, \infty)\big)$ for some $\zeta > 0$ fixed, and
$g_t(x) = \frac{1}{x}\mathbf{1}_{(-\infty, t]}(x)$ if $t < 0$, and $g_t(x) = \frac{1}{x}\mathbf{1}_{[t, \infty)}(x)$ if $t > 0$.

25 Main result. Motivated by the results of Neumann and Reiß (2009) and by their drawbacks, Nickl and Reiß (2012) constructed a natural estimator $N_n$ of $N$ and proved a Donsker-type theorem for the cumulative Lévy measure under the assumption that the characteristic function $\varphi$ has polynomial decay.

26 Main result. Theorem (Nickl and Reiß, 2012). Assume $\varphi$ has polynomial decay, $\nu$ has a bounded $(2+\varepsilon)$-moment (not strictly necessary) and $x\nu$ is of bounded variation and has a bounded Lebesgue density. Then $\sqrt{n}\,(N_n - N) \to G_\varphi$ as $n \to \infty$, in distribution in $L^\infty_\zeta$, where $G_\varphi$ is a centred Gaussian random variable in $L^\infty_\zeta$ with covariance structure
$\Sigma_{t,s} = \frac{1}{\Delta^2} \int_{\mathbb{R}} \big(\mathcal{F}^{-1}[\varphi^{-1}(-\cdot)] * g_t\big)(u)\,\big(\mathcal{F}^{-1}[\varphi^{-1}(-\cdot)] * g_s\big)(u)\,P(du)$, $|t|, |s| \geq \zeta$,
and $P$ is the distribution of $X$ (the increment of the Lévy process).


28 Consequences and difficulties of the theorem. Thanks to the distributional character of this result, Kolmogorov-Smirnov-type tests and confidence bands for $N$ can be derived. There are two main difficulties in the proof of the theorem: one concerns understanding the pseudo-differential operator given by convolution with $\mathcal{F}^{-1}[\varphi^{-1}(-\cdot)]$, and the other has to do with overcoming the unboundedness of the estimator of $N$.

29 Difficulties: the pseudo-differential operator. 1. The operator $f \mapsto \mathcal{F}^{-1}[\varphi^{-1}(-\cdot)] * f$ is a pseudo-differential operator which fractionally differentiates $f$. 2. When applied to $f(\cdot) = \mathbf{1}_{(-\infty, t]}(\cdot)$, $t < 0$, it induces an integrable singularity at $t$. 3. In general it appears integrated against $P$, which has an integrable singularity at the origin. 4. However, if we let $t$ be arbitrarily close to $0$, the singularity becomes non-integrable. This roughly explains why the process is defined in $L^\infty_\zeta$ with $\zeta > 0$, although it also brings other difficulties into the proofs.

30 Difficulties: the unboundedness of the estimator. In order to understand the second difficulty, let us derive the estimator $N_n$ they use. The assumption of polynomial decay of $\varphi$ automatically restricts the Lévy processes to those of locally finite variation. This implies that the Lévy exponent takes the form $\psi(u) = i\gamma u + \int (e^{iux} - 1)\,\nu(dx)$. Suppose $\gamma = 0$ (it is possible to prove that $\gamma$ can be neglected w.l.o.g.). Differentiating and using that $\psi(u) = \frac{1}{\Delta}\log(\varphi(u))$,
$\frac{1}{i\Delta}\frac{\varphi'(u)}{\varphi(u)} = \frac{1}{i}\psi'(u) = \int_{\mathbb{R}} e^{iux}\,x\,\nu(dx) = \mathcal{F}[x\nu](u).$

31 Difficulties: the unboundedness of the estimator. Inverting the Fourier transform, dividing the LHS and RHS by $x$ and integrating from $-\infty$ to $t < 0$ with respect to $x$ gives
$\frac{1}{i\Delta}\int_{-\infty}^{t} \frac{1}{x}\,\mathcal{F}^{-1}\Big[\frac{\varphi'}{\varphi}\Big](x)\,dx = \int_{-\infty}^{t} \nu(dx).$
Therefore we can estimate the RHS by the empirical version of the LHS, namely
$\frac{1}{i\Delta}\int_{-\infty}^{t} \frac{1}{x}\,\mathcal{F}^{-1}\Big[\frac{\varphi_n'}{\varphi_n}\Big](x)\,dx,$
where $\varphi_n(u) = \frac{1}{n}\sum_{k=1}^{n} e^{iuX_k}$ and $\varphi_n'(u) = \frac{1}{n}\sum_{k=1}^{n} iX_k e^{iuX_k}$.

32 Difficulties: the unboundedness of the estimator. However, the Fourier inverse of $\varphi_n'/\varphi_n$ may not be well-defined, so we induce a smooth spectral cutoff by multiplying this ratio by $\mathcal{F}[K_h]$, $K_h(\cdot) = K(\cdot/h)/h$, where $K$ is a differentiable kernel integrating to one, decaying sufficiently fast and whose Fourier transform has compact support. Then the estimator is defined as
$N_n(t) = \frac{1}{i\Delta}\int_{\mathbb{R}} g_t(x)\,\mathcal{F}^{-1}\Big[\frac{\varphi_n'}{\varphi_n}\,\mathcal{F}[K_h]\Big](x)\,dx, \quad t > 0.$
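
To make the construction concrete, here is a rough numerical sketch of such an estimator for $t > 0$. It replaces the smooth spectral cutoff $\mathcal{F}[K_h]$ by a sharp truncation to $|u| \leq 1/h$, discretises all integrals on ad hoc grids and neglects the drift; every tuning choice is an illustrative assumption, not the construction of the paper.

```python
import numpy as np

def estimate_cumulative_levy_measure(X, delta, t_grid, h):
    """Rough sketch of an estimator of N(t) = integral_t^infty nu(dx), t > 0,
    from i.i.d. increments X_k observed at time step delta (finite variation,
    gamma = 0).  A sharp cutoff |u| <= 1/h stands in for F[K_h]."""
    X = np.asarray(X)
    u = np.linspace(-1.0 / h, 1.0 / h, 1025)        # frequency grid inside the cutoff
    du = u[1] - u[0]
    E = np.exp(1j * np.outer(u, X))                 # e^{i u X_k}
    phi_n = E.mean(axis=1)                          # empirical characteristic function
    dphi_n = (1j * X * E).mean(axis=1)              # its derivative
    # F[x nu](u) = phi'(u) / (i * delta * phi(u)); phi_n stays away from zero on
    # |u| <= 1/h when phi decays polynomially and h is not chosen too small.
    F_xnu = dphi_n / (1j * delta * phi_n)
    # Fourier inversion onto a positive x-grid, giving an estimate of x*nu(x).
    x = np.linspace(1e-3, 10.0, 2000)
    dx = x[1] - x[0]
    xnu = np.real(np.exp(-1j * np.outer(x, u)) @ F_xnu) * du / (2 * np.pi)
    levy_density = xnu / x
    # N_n(t): integrate the estimated Levy density over [t, infinity) (Riemann sum).
    return np.array([levy_density[x >= t].sum() * dx for t in t_grid])
```

A call such as estimate_cumulative_levy_measure(X, delta, np.linspace(0.5, 3.0, 6), h=0.1), with X the increments from the earlier simulation sketch, returns a vector of estimates of N on that grid; the quality depends heavily on n, Δ and h.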

33 Difficulties: the unboundedness of the estimator. 1. The spectral cutoff guarantees that $N_n$ is well-defined with probability tending to one. 2. Nevertheless, for finite $n$, $\sqrt{n}(N_n - N)$ may not be bounded, and therefore we cannot apply classical empirical process theory to conclude the Donsker-type theorem. 3. Giné and Nickl (2008) developed the theory of smoothed empirical processes, which shows that, under certain conditions, smoothed empirical processes may converge in situations where the unsmoothed process does not. 4. This is in fact what happens here, and it is the first application of this novel theory.

34 Main result. Theorem (Nickl and Reiß, 2012). Assume $\varphi$ has polynomial decay, $\nu$ has a bounded $(2+\varepsilon)$-moment (not strictly necessary) and $x\nu$ is of bounded variation and has a bounded Lebesgue density. Then $\sqrt{n}\,(N_n - N) \to G_\varphi$ as $n \to \infty$, in distribution in $L^\infty_\zeta$, where $G_\varphi$ is a centred Gaussian random variable in $L^\infty_\zeta$ with covariance structure
$\Sigma_{t,s} = \frac{1}{\Delta^2} \int_{\mathbb{R}} \big(\mathcal{F}^{-1}[\varphi^{-1}(-\cdot)] * g_t\big)(u)\,\big(\mathcal{F}^{-1}[\varphi^{-1}(-\cdot)] * g_s\big)(u)\,P(du)$, $|t|, |s| \geq \zeta$,
and $P$ is the distribution of $X$ (the increment of the Lévy process).

35 Tests and confidence bands. As already mentioned, because of the distributional nature of the result, valuable statistical inference procedures such as goodness-of-fit tests and confidence bands can be derived from it. Constructing these confidence bands follows directly from the theorem; a band-construction sketch is given below. However, they have to be constructed for $t \geq \zeta > 0$, so we cannot probabilistically bound the Lévy measure around the origin, which is of great importance for applications. For this reason we are trying to understand how $\sqrt{n}(N_n(t_n) - N(t_n))$ behaves as $n \to \infty$ when $t_n \to 0$. Let us go into more detail on the latter.
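
One generic way to turn the estimator into a simultaneous band on $t \geq \zeta$ is to approximate the distribution of $\sup_{t \geq \zeta}|N_n(t) - N(t)|$ by resampling. The sketch below uses a nonparametric bootstrap of the increments on top of the hypothetical estimate_cumulative_levy_measure sketch above; it is a plausible generic recipe, not necessarily the band construction used in the paper.

```python
import numpy as np

def bootstrap_confidence_band(X, delta, t_grid, h, level=0.95, B=200, seed=None):
    """Simultaneous confidence band for N on a grid of t >= zeta, obtained by
    bootstrapping the sup-deviation of the (hypothetical) estimator above."""
    rng = np.random.default_rng(seed)
    N_hat = estimate_cumulative_levy_measure(X, delta, t_grid, h)
    sups = np.empty(B)
    for b in range(B):
        Xb = rng.choice(X, size=len(X), replace=True)      # resample increments
        Nb = estimate_cumulative_levy_measure(Xb, delta, t_grid, h)
        sups[b] = np.max(np.abs(Nb - N_hat))
    q = np.quantile(sups, level)                            # bootstrap sup-quantile
    return N_hat - q, N_hat + q
```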

36 Understanding the Lévy measure at the origin. In general, $N_n$ and $N$ have a discontinuity at the origin. To compensate for this we have to multiply the process by a positive power of $t_n$, namely $\sqrt{n}\,t_n^{\rho}\,(N_n(t_n) - N(t_n))$. From the proofs in Nickl and Reiß (2012) it is not hard to derive a necessary bound on $\rho$. Furthermore, there is strong evidence that $\rho$ depends on the Lévy process to be estimated, which also brings the problem of estimating it from the data. We still do not fully understand this, and it again involves, among other things, understanding the effect of convolving with $\mathcal{F}^{-1}[\varphi^{-1}(-\cdot)]$.

37 Understanding the Lévy measure at the origin. In addition to the confidence band for $N$ around the origin, this result would tell us something else. The speed at which we approach the origin when constructing this band is given by $t_n$, so it is arbitrarily fast. However, the term $\sqrt{n}\,t_n^{\rho}$ gives the speed at which the random variable $\sqrt{n}\,t_n^{\rho}(N_n(t_n) - N(t_n))$ approaches a normal distribution (this is not evident from what we have said). Hence we observe a trade-off, very relevant for applications, between the speed of approaching the origin and the accuracy of the normal approximation.

38 Other extensions. A question of particular interest in the area of statistics for stochastic processes is whether one can allow for high-frequency observation regimes $\Delta_n \to 0$. This will be published soon by Nickl et al. The estimator $N_n$ of course attains the minimax rate for the problem but, is it asymptotically efficient? I.e., is the limiting variance for each $t$ the smallest possible? It is in fact the case that it asymptotically achieves the Cramér-Rao bound, and this too will soon be published by Nickl et al. This result concerns only one functional of the Lévy measure, namely its cumulative function, but Donsker-type theorems for more general functionals of $\nu$ are highly relevant extensions of the result.

39 References, in order of appearance: Neumann, M.H. and Reiß, M. (2009). Nonparametric estimation for Lévy processes from low-frequency observations. Bernoulli 15(1). Nickl, R. and Reiß, M. (2012). A Donsker theorem for Lévy measures. J. Funct. Anal. Giné, E. and Nickl, R. (2008). Uniform central limit theorems for kernel density estimators. Probab. Theory Related Fields 141.

40 Thanks for your attention!
