Statistical Skorohod embedding problem: optimality and asymptotic normality

Denis Belomestny and John Schoenmakers

March 4, 2015

Abstract

Given a Brownian motion B, we consider the so-called statistical Skorohod embedding problem of recovering the distribution of an independent random time T based on an i.i.d. sample from B_T. Our approach is based on the genuine use of the Mellin transform. We propose a consistent estimator for the density of T, derive its convergence rates and prove their optimality. Moreover, we address the question of asymptotic normality.

Keywords: Skorohod embedding problem, Mellin transform, multiplicative deconvolution, volatility density estimation.

1 Introduction

The so-called Skorohod embedding (SE) problem, or Skorohod stopping problem, was first stated and solved by Skorohod in 1961. The problem can be formulated as follows.

Problem 1 (Skorohod Embedding Problem). For a given probability measure µ on R such that ∫ x² dµ(x) < ∞ and ∫ x dµ(x) = 0, find a stopping time T such that B_T ∼ µ and (B_{T∧t}) is a uniformly integrable martingale.

The SE problem has recently drawn much attention in the literature; see, e.g., the survey of Obłój [3], which contains an extensive list of references. In fact, there is no unique solution to the SE problem, and many different solutions are currently available. This means that, from a statistical point of view, the SE problem is not well posed. In this paper we study what we call the statistical Skorohod embedding (SSE) problem.

Problem 2 (Statistical Skorohod Embedding Problem). Based on an i.i.d. sample X_1, ..., X_n from the distribution of B_T, consistently estimate the distribution of the random time T, where B and T are assumed to be independent.

The independence of B and T is needed to ensure the identifiability of the distribution of T from the distribution of B_T.
In fact, due to the well-known scaling properties of Brownian motion, the SSE problem is closely related to the multiplicative deconvolution problem and to the problem of volatility density estimation (see, e.g., Van Es et al. [7], Van Es and Spreij [6] and Van Es et al. [5]). A standard approach to problems of this type is to first apply a log-transformation and then to solve the resulting additive deconvolution problem by means of the standard kernel density deconvolution technique. Our approach is different

D. Belomestny, Duisburg-Essen University, Thea-Leymann-Str. 9, D-45127 Essen, Germany, denis.belomestny@uni-due.de. Partially supported by the Deutsche Forschungsgemeinschaft through the SFB 823 "Statistical modelling of nonlinear dynamic processes".
J. Schoenmakers, Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstr. 39, 10117 Berlin, Germany, schoenma@wias-berlin.de.
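The scaling identity B_T = √T · B_1 (in law, for T independent of B) behind this multiplicative-deconvolution viewpoint is easy to check by simulation. The sketch below is our own illustration (T ~ Exp(1) and a grid-discretised Brownian path; none of it is from the paper): it compares the moments of a simulated B_T with those of √T · B_1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 20_000, 0.01

# Independent random times T ~ Exp(1), rounded to the simulation grid.
T = rng.exponential(1.0, size=n)
steps = np.maximum(np.round(T / dt).astype(int), 1)

# Evaluate an independent Brownian path at (grid-rounded) time T for each
# sample: B_T is the sum of `steps` i.i.d. N(0, dt) increments.
B_T = np.array([rng.normal(0.0, np.sqrt(dt), size=s).sum() for s in steps])

# Scaling property: B_T has the same law as sqrt(T) * B_1 with B_1 ~ N(0, 1).
X = np.sqrt(T) * rng.standard_normal(n)

# Both second moments should be close to E[T] = 1, and both fourth moments
# close to 3 * E[T^2] = 6 (odd moments vanish by symmetry).
print(np.mean(B_T**2), np.mean(X**2))
print(np.mean(B_T**4), np.mean(X**4))
```

Matching moments of all orders is exactly why only the law of T (and not the stopping rule) can be identified from observations of B_T.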

and makes use of the Mellin transform device, which, in view of the well-known properties of the Mellin transform, seems to be appropriate here. We construct a consistent estimator for the density of T and derive its convergence rates in different norms. Furthermore, we show that the obtained rates are optimal in the minimax sense. The asymptotic normality of the proposed estimator is addressed as well. Let us stress that the results on the optimality of the rates and on asymptotic normality cannot be derived from the known results on additive deconvolution problems.

The paper is organised as follows. In Section 2 we discuss some properties of the Mellin transform and introduce our main estimation procedure. Section 3 is devoted to the convergence properties of the estimator constructed in the previous section. In particular, we prove upper bounds for the pointwise estimation risk and show that these upper bounds are optimal in the minimax sense. In Section 4 the asymptotic normality of the proposed estimator is analysed. Some numerical examples can be found in Section 5. Finally, all proofs are collected in Section 6.

2 Construction of the estimator for p_T

Let B be a Brownian motion and let a random variable T be independent of B. We then have

(1)    X := B_T ≡ √T · B_1    (in distribution),

and the problem of reconstructing T is related to a multiplicative deconvolution problem. While for additive deconvolution problems the Fourier transform plays an important role, here we can conveniently use the Mellin transform.

Definition 2.1. Let ξ be a non-negative random variable with a probability density p_ξ; then the Mellin transform of p_ξ is defined via

(2)    M[p_ξ](z) := E[ξ^{z−1}] = ∫_0^∞ p_ξ(x) x^{z−1} dx    for all z ∈ S_ξ with S_ξ := { z ∈ C : E[ξ^{Re(z)−1}] < ∞ }.

Since p_ξ is a density, it is integrable, and so at least {z ∈ C : Re(z) = 1} ⊂ S_ξ. Under mild assumptions on the growth of p_ξ near the origin, one obtains {z ∈ C : a_ξ < Re(z) < b_ξ} ⊂ S_ξ for some a_ξ < b_ξ. Then the Mellin transform (2) exists and is analytic in the strip a_ξ < Re z < b_ξ.
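Definition 2.1 can be sanity-checked numerically. The snippet below is an illustration under the assumption ξ ~ Exp(1), for which M[p_ξ](z) = Γ(z); the defining integral is evaluated by quadrature and compared with the closed form.

```python
import numpy as np
from scipy import integrate, special

# Mellin transform M[p](z) = E[xi^{z-1}] = ∫_0^∞ p(x) x^{z-1} dx, evaluated
# by quadrature for the Exp(1) density p(x) = e^{-x}; the closed form is Γ(z).
def mellin_exp1(z):
    f = lambda x, g: np.exp(-x) * x ** (z.real - 1) * g(z.imag * np.log(x))
    re = integrate.quad(lambda x: f(x, np.cos), 0, np.inf)[0]
    im = integrate.quad(lambda x: f(x, np.sin), 0, np.inf)[0]
    return re + 1j * im

z = 1.3 + 0.7j
print(mellin_exp1(z))        # numerical value of M[p](z)
print(special.gamma(z))      # Γ(z), the exact Mellin transform
```

Note that `scipy.special.gamma` accepts complex arguments, which is also what the estimator below relies on.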
For example, if p_ξ is essentially bounded in a right-hand neighbourhood of zero, we may take a_ξ = 0 and b_ξ = 1. The role of the Mellin transform in probability theory is mainly related to products of independent random variables: in fact, it is well known that the probability density of the product of two independent random variables is given by the Mellin convolution of the two corresponding densities. Due to (1), the SSE problem is closely connected to the Mellin convolution. Suppose that the random time T has a density p_T which is essentially bounded in a right-hand neighbourhood of zero and fulfils

(3)    ∫ x^{−δ} p_T(x) dx < ∞    for some δ > 0.

Since S_{|B_1|} = {z ∈ C : Re(z) > 0}, we derive for 0 < Re(z) ≤ 1,

M[p_X](z) = E[|B_1|^{z−1}] · E[T^{(z−1)/2}] = M[p_{|B_1|}](z) · M[p_T]((z+1)/2) = (2^{(z−1)/2}/√π) Γ(z/2) M[p_T]((z+1)/2).
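The factorisation above can be verified by Monte Carlo for a concrete mixing law. The snippet below is our own illustration, with T ~ Exp(1) so that M[p_T](s) = Γ(s); it compares the empirical moment E|X|^{z−1} with the closed form.

```python
import numpy as np
from scipy import special

rng = np.random.default_rng(1)
n = 400_000

# Mixing time T ~ Exp(1), so M[p_T](s) = Γ(s); X = B_T ~ sqrt(T) * N(0,1).
T = rng.exponential(1.0, n)
X = np.sqrt(T) * rng.standard_normal(n)

z = 0.9 + 0.5j
# Monte Carlo estimate of M[p_X](z) = E|X|^{z-1} ...
mc = np.mean(np.abs(X) ** (z - 1))
# ... against the factorisation
#   M[p_X](z) = 2^{(z-1)/2} / sqrt(pi) * Γ(z/2) * M[p_T]((z+1)/2).
cf = 2 ** ((z - 1) / 2) / np.sqrt(np.pi) * special.gamma(z / 2) * special.gamma((z + 1) / 2)
print(abs(mc - cf))  # small Monte Carlo error
```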

As a result,

M[p_T](z) = √π · M[p_X](2z−1) / ( 2^{z−1} Γ(z−1/2) ),    1/2 < Re(z) ≤ 1,

and the Mellin inversion formula yields

(4)    p_T(x) = (1/2π) ∫_{γ−i∞}^{γ+i∞} x^{−γ−iv} M[p_T](γ+iv) dv = (√π/2π) ∫_{−∞}^{∞} x^{−γ−iv} M[p_X](2(γ+iv)−1) / ( 2^{γ+iv−1} Γ(γ+iv−1/2) ) dv

for 1/2 < γ ≤ 1, x > 0. Furthermore, the Mellin transform of p_X can be directly estimated from the data X_1, ..., X_n via the empirical Mellin transform:

(5)    M_n[p_X](z) := (1/n) Σ_{k=1}^n |X_k|^{z−1},    1/2 < Re(z),

where the condition Re(z) > 1/2 guarantees that the variance of the estimator (5) is finite. Note, however, that the integral in (4) may fail to exist if we replace M[p_X] by M_n[p_X]. We therefore need to regularise the inverse Mellin operator. To this end, let us consider a kernel K(·) supported on [−1, 1] and a sequence of bandwidths h_n > 0 tending to 0 as n → ∞. Then we define, in view of (5), for some 3/4 < γ ≤ 1,

(6)    p_{n,γ}(x) := (√π/2π) ∫_{−∞}^{∞} x^{−γ−iv} K(v h_n) M_n[p_X](2(γ+iv)−1) / ( 2^{γ+iv−1} Γ(γ−1/2+iv) ) dv.

3 Convergence

For our convergence analysis, we will henceforth take the simplest kernel K(y) = 1_{[−1,1]}(y), but note that in principle other kernels may be considered as well. The next theorem states that p_{n,γ} converges to p_T at a polynomial rate, provided the Mellin transform of p_T decays exponentially fast. We shall use throughout the notation a_n ≲ b_n if a_n is bounded by a constant multiple of b_n, independently of the parameters involved, that is, in the Landau notation, a_n = O(b_n), n → ∞.

Theorem 3.1. For any β > 0 and L > 0, introduce the class of functions

C(β, L) := { f : ∫ |M[f](1+iv)| e^{β|v|} dv ≤ L }.

Assume that p_T ∈ C(β, L) for some β > 0 and L > 0, and that (3) holds. Then for some constant C_L depending on L only, it holds

(7)    sup_x E[ { x (p_T(x) − p_{n,1}(x)) }² ] ≤ C_L [ e^{−2β/h_n} + (1/n) e^{π/h_n} ].

By next choosing

(8)    h_n = (π + 2β)/log n,

we arrive at the rate

(9)    sup_x E[ { x (p_T(x) − p_{n,1}(x)) }² ] ≲ n^{−2β/(π+2β)},    n → ∞.
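A minimal numerical sketch of the procedure (5)–(6) may look as follows. The function name p_hat, the sampling model and all tuning values (γ = 0.9, h = 0.45, the v-grid) are our own choices for illustration, not the paper's.

```python
import numpy as np
from scipy import special

def p_hat(x, X, gamma=0.9, h=0.45):
    """Estimator (6) with kernel K = 1_{[-1,1]}: truncated inverse Mellin
    transform applied to the empirical Mellin transform (5)."""
    v = np.linspace(-1.0 / h, 1.0 / h, 401)
    logX = np.log(np.abs(X))
    w = np.exp((2 * gamma - 2) * logX)            # |X_k|^{2 gamma - 2}
    # Empirical Mellin transform (5) at z = 2(gamma + iv) - 1.
    Mn = np.array([np.mean(w * np.exp(2j * vv * logX)) for vv in v])
    integrand = (x ** (-gamma - 1j * v) * Mn
                 / (2 ** (gamma - 1 + 1j * v) * special.gamma(gamma - 0.5 + 1j * v)))
    dv = v[1] - v[0]
    integral = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dv
    return (np.sqrt(np.pi) / (2 * np.pi) * integral).real

rng = np.random.default_rng(2)
n = 50_000
T = rng.gamma(2.0, 1.0, n)                        # p_T(t) = t e^{-t}
X = np.sqrt(T) * rng.standard_normal(n)
est = p_hat(1.0, X)
print(est, np.exp(-1.0))                          # estimate vs p_T(1) = e^{-1}
```

With this Gamma(2,1) mixing density the printed estimate should come out close to p_T(1) = e^{−1} ≈ 0.368; the slow rates of Theorem 3.1 mean that fairly large samples are needed for high accuracy.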

           C(β, L)               D(β, L)
rate       n^{−2β/(π+2β)}        log^{−2β}(n)

Table 1: Minimax rates of convergence for the classes C(β, L) and D(β, L).

Let us turn now to some examples.

Example 3.2. Consider the class of Gamma densities

p_T(x; α) = x^{α−1} e^{−x} / Γ(α),    x ≥ 0,

for α > 0. Since

M[p_T](z) = Γ(z + α − 1)/Γ(α),    Re(z) > 1 − α,

we derive that p_T ∈ C(β, L) for all 0 < β < π/2 and some L = L(β), due to the asymptotic properties of the Gamma function (see Lemma 6.5 in the Appendix). As a result, Theorem 3.1 implies

sup_x E[ { x (p_T(x) − p_{n,1}(x)) }² ] ≲ n^{−ρ},    n → ∞,

for any ρ < 1/2.

If M[p_T] decays polynomially fast, we get the following result.

Theorem 3.3. Consider the class of functions

D(β, L) := { f : ∫ |M[f](1+iv)| (1 + |v|^β) dv ≤ L },

and assume that p_T ∈ D(β, L) for some β > 0 and L > 0. If (3) holds for some δ > 0, then for some constant D_L,

(10)    sup_x E[ { x (p_T(x) − p_{n,1}(x)) }² ] ≤ D_L [ h_n^{2β} + (1/n) e^{π/h_n} ].

By choosing

(11)    h_n = π / ( log n − 2β log log n ),

we arrive at

(12)    sup_x E[ { x (p_T(x) − p_{n,1}(x)) }² ] ≲ log^{−2β}(n),    n → ∞.

The rates of Theorem 3.1 and Theorem 3.3, summarised in Table 1, are in fact optimal (up to a logarithmic factor) in the minimax sense for the classes C(β, L) and D(β, L), respectively.
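The Mellin transform used in Example 3.2 can be checked by quadrature; the snippet below is a plain numerical illustration with an arbitrarily chosen α.

```python
import numpy as np
from scipy import integrate, special

# Example 3.2: for the Gamma density p_T(x; alpha) = x^{alpha-1} e^{-x} / Γ(alpha),
# the Mellin transform is M[p_T](z) = Γ(z + alpha - 1) / Γ(alpha).
alpha = 2.5
for z in (0.8, 1.0, 1.7):
    num = integrate.quad(
        lambda x: x ** (alpha - 1) * np.exp(-x) / special.gamma(alpha) * x ** (z - 1),
        0, np.inf)[0]
    exact = special.gamma(z + alpha - 1) / special.gamma(alpha)
    print(z, num, exact)
```

Along vertical lines z = 1 + iv this transform inherits the e^{−π|v|/2} decay of the Gamma function, which is exactly why p_T falls into C(β, L) for every β < π/2.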

Theorem 3.4. Fix some β > 0. There are ε > 0 and x_0 > 0 such that

lim inf_{n→∞} inf_{p̂_n} sup_{p_T ∈ C(β,L)} P^n_{p_T}( |p_T(x_0) − p̂_n(x_0)| ≥ ε n^{−2β/(π+2β)} log^{−ρ}(n) ) > 0,

lim inf_{n→∞} inf_{p̂_n} sup_{p_T ∈ D(β,L)} P^n_{p_T}( |p_T(x_0) − p̂_n(x_0)| ≥ ε log^{−2β}(n) ) > 0,

for some ρ > 0, where the infimum is taken over all estimators p̂_n (i.e. all measurable functions of X_1, ..., X_n) of p_T, and P^n_{p_T} is the distribution of the i.i.d. sample X_1, ..., X_n with X = B_T and T ∼ p_T.

4 Asymptotic normality

In the case K(v) = 1_{[−1,1]}(v), the estimate p_{n,γ}(x) can be written as

p_{n,γ}(x) = (1/n) Σ_{k=1}^n Z_{n,k},

where

Z_{n,k} := (√π/2π) ∫_{−1/h_n}^{1/h_n} x^{−γ−iv} |X_k|^{2(γ+iv)−2} / ( 2^{γ+iv−1} Γ(γ−1/2+iv) ) dv.

The following theorem holds.

Theorem 4.1. Suppose that for some γ > 3/4,

[ d/du ( Γ(2γ−3/2+iu) M[p_T](2γ−1+iu) ) ]_{u=0} ≠ 0    and    ∫ |M[p_T](γ+iu)| du < ∞;

then

ρ_n ( p_{n,γ}(x) − E[p_{n,γ}(x)] ) →^D N(0, σ²)

for some σ² > 0, where

ρ_n = n^{1/2} h_n^{−(γ−1)} log(1/h_n) exp( −π/(2h_n) )

and h_n = c (log n)^{−1} for some c > 0.

5 Numerical examples

Barndorff-Nielsen et al. [1] consider a class of variance-mean mixtures of normal distributions which they call generalised hyperbolic distributions. The univariate and symmetric members of this family appear as normal scale mixtures whose mixing distribution is the generalised inverse Gaussian distribution with density

(13)    p_T(v) = ( (κ/δ)^λ / (2 K_λ(δκ)) ) v^{λ−1} exp( −(κ² v + δ²/v)/2 ),    v > 0
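Density (13) can be evaluated and simulated with scipy.stats.geninvgauss. The parameter mapping below (p = λ, b = δκ, scale = δ/κ) is our own reading of (13), not from the paper, and should be double-checked against the intended parameterisation.

```python
import numpy as np
from scipy import stats, special

# GIG density (13). scipy.stats.geninvgauss has pdf ∝ x^{p-1} e^{-b(x + 1/x)/2};
# matching (13) requires p = lambda, b = delta*kappa and scale = delta/kappa
# (our mapping, assumed here for illustration).
kappa, delta, lam = 1.5, 0.8, 2.0

def p_T(v):
    return ((kappa / delta) ** lam / (2.0 * special.kv(lam, delta * kappa))
            * v ** (lam - 1) * np.exp(-(kappa ** 2 * v + delta ** 2 / v) / 2.0))

g = stats.geninvgauss(p=lam, b=delta * kappa, scale=delta / kappa)
v = np.array([0.3, 1.0, 2.5])
diff = np.abs(p_T(v) - g.pdf(v)).max()
print(diff)  # ≈ 0 if the mapping is correct

# Sampling the mixture X = sqrt(T) * N(0,1) then proceeds from g.rvs(...).
```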

Figure 1: Left: the Gamma density (red) and its estimates (grey) for the sample size n. Right: the box plots of the loss sup_x |p_{n,γ}(x) − p_T(x)| over a fixed interval for different sample sizes.

Here κ, δ ≥ 0 and λ are parameters and K_λ is a modified Bessel function. The resulting normal scale mixture has probability density function

p_X(x) = (κ/(2π))^{1/2} δ^{−λ} K_λ^{−1}(δκ) (δ² + x²)^{(2λ−1)/4} K_{λ−1/2}( κ (δ² + x²)^{1/2} ).

Let us start with a simple example, the Gamma density p_T(x) = x exp(−x), x ≥ 0, which is a special case of (13) for δ → 0, λ = 2 and κ = √2. We simulate a sample of size n from the distribution of X, and construct the estimate (6) with the bandwidth h_n given (up to a constant not depending on n) by (8) and with γ = 0.9. In Figure 1 (left), one can see the estimated densities based on independent samples from B_T of size n, together with p_T in red. Next we estimate the distribution of the loss sup_x |p_{n,γ}(x) − p_T(x)| based on independent repetitions of the estimation procedure. The corresponding box plots for different n are shown in Figure 1 (right).

6 Proofs

6.1 Proof of Theorem 3.1

First let us estimate the bias of p_{n,γ}. We have

E[p_{n,1}(x)] = (√π/2π) ∫_{−∞}^{∞} x^{−1−iv} K(v h_n) M[p_X](1 + 2iv) / ( 2^{iv} Γ(1/2 + iv) ) dv = (1/2π) ∫_{−1/h_n}^{1/h_n} x^{−1−iv} M[p_T](1 + iv) dv.

Hence

p_T(x) − E[p_{n,1}(x)] = (1/2π) ∫_{|v| ≥ 1/h_n} M[p_T](1 + iv) x^{−1−iv} dv

and we then have the estimate

(14)    sup_x { x |E[p_{n,1}(x)] − p_T(x)| } ≤ (1/2π) ∫_{|v| ≥ 1/h_n} |M[p_T](1+iv)| dv ≤ (e^{−β/h_n}/2π) ∫_{|v| ≥ 1/h_n} |M[p_T](1+iv)| e^{β|v|} dv ≤ L e^{−β/h_n} / (2π).

As to the variance, by the simple inequality Var(∫ f_t dt) ≤ ( ∫ √(Var[f_t]) dt )², which holds for any random function f_t with ∫ E[f_t²] dt < ∞, we get

Var[x p_{n,1}(x)] = Var[ (√π x/2π) ∫ x^{−1−iv} K(v h_n) M_n[p_X](1+2iv) / ( 2^{iv} Γ(1/2+iv) ) dv ]
≤ (1/4π) ( ∫_{−1/h_n}^{1/h_n} √( Var( M_n[p_X](1+2iv) ) ) / |Γ(1/2+iv)| dv )²
≤ (1/(4π n)) ( ∫_{−1/h_n}^{1/h_n} √( Var( |X_1|^{2iv} ) ) / |Γ(1/2+iv)| dv )²

(15)    ≤ (1/(4π n)) [ ∫_{−1/h_n}^{1/h_n} dv / |Γ(1/2+iv)| ]².

We obtain from (15), due to Corollary 6.6 (see Appendix),

Var[x p_{n,1}(x)] ≤ (C_L/n) e^{π/h_n},

and so (7) follows. Finally, by plugging (8) into (7) we get (9), and the proof is finished.

6.2 Proof of Theorem 3.3

The proof is analogous to the one of Theorem 3.1; the only difference is the bias estimate (14), which now becomes

sup_x { x |E[p_{n,1}(x)] − p_T(x)| } ≤ (L/2π) h_n^β.

With the choice (11) we obtain the logarithmic rate (12).

6.3 Proof of Theorem 3.4

Our construction relies on the following basic result (see [4] for the proof).

Theorem 6.1. Suppose that for some ε > 0 and all n ∈ N there are two densities p_{1,n}, p_{2,n} ∈ G for some class G such that d(p_{1,n}, p_{2,n}) > 2 ε v_n, where d is some metric on G. If the observations in model n follow the product law P^n_p under the density p ∈ G, and

χ²(p_{1,n}, p_{2,n}) := ∫ ( p_{1,n}(x) − p_{2,n}(x) )² / p_{2,n}(x) dx ≤ (1/n) log(1 + 4δ²)

holds for some δ ∈ (0, 1/2), then the following lower bound holds for all density estimators p̂_n based on observations from model n:

inf_{p̂_n} sup_{p ∈ G} P^n_p( d(p̂_n, p) ≥ ε v_n ) ≥ δ.

If the above holds for fixed ε, δ > 0 and all n ∈ N, then the optimal rate of convergence in a minimax sense over G is not faster than v_n.

We only present the proof of a lower bound for the class C(β, L); the proof for the class D(β, L) is similar. Let us start with the construction of the densities p_{1,n} and p_{2,n}. Define for any ν > 0 and M > 0 two auxiliary functions

q(x) = ( ν sin(π/ν)/π ) · 1/(1 + x^ν),    x ≥ 0,

and

ρ_M(x) = ( e^{−log²(x)/2} / (√(2π) x) ) · sin(M log(x)),    x ≥ 0.

The properties of the functions q and ρ_M are collected in the following lemma.

Lemma 6.2. The function q is a probability density on R_+ with the Mellin transform

M[q](z) = sin(π/ν)/sin(πz/ν),    0 < Re[z] < ν.

The Mellin transform of the function ρ_M is given by

(16)    M[ρ_M](u + iv) = (1/(2i)) [ e^{(u−1+i(v+M))²/2} − e^{(u−1+i(v−M))²/2} ].

Hence

∫ ρ_M(x) dx = M[ρ_M](1) = 0.

Set now for any M > 0

q_{1,M}(x) := q(x),    q_{2,M}(x) := q(x) + (q ∘ ρ_M)(x),

where f ∘ g stands for the multiplicative convolution of two functions f and g on R_+, defined as

(f ∘ g)(x) := ∫_0^∞ f(t) g(x/t) dt/t,    x ≥ 0.

The following lemma describes some properties of q_{1,M} and q_{2,M}.

Lemma 6.3. For any M > 0 the function q_{2,M} is a probability density satisfying

‖q_{1,M} − q_{2,M}‖_∞ = sup_{x ∈ R_+} |q_{1,M}(x) − q_{2,M}(x)| ≍ exp(−Mπ/ν),    M → ∞.

Moreover, q_{1,M} and q_{2,M} are in C(β, L) for all 0 < β < π/ν and L depending on ν.

Proof. First note that

∫ q_{2,M}(x) dx = 1 + ∫ (q ∘ ρ_M)(x) dx = 1 + M[q](1) M[ρ_M](1) = 1.

Furthermore, substituting x = e^v and writing l = log(y), we obtain

(q ∘ ρ_M)(y) = ∫_0^∞ ( e^{−log²(x)/2}/(√(2π) x) ) sin(M log(x)) · ( ν sin(π/ν)/π ) · (1 + (y/x)^ν)^{−1} dx/x
= ( ν sin(π/ν)/(π √(2π)) ) e^{−l} ∫ H(v) sin(Mv) R(l − v) dv,

where R(x) = e^x/(1 + e^{νx}) and H(x) = e^{−x²/2}. Note that

F[R](u) = ∫ e^{x+iux} (1 + e^{νx})^{−1} dx = (1/ν) Γ( (1+iu)/ν ) Γ( 1 − (1+iu)/ν ).

Hence, due to the Parseval identity,

∫ H(v) sin(Mv) R(l − v) dv = (1/2π) ∫ e^{−iul} (1/(2i)) [ Ĥ(u+M) − Ĥ(u−M) ] F[R](u) du,

with Ĥ(u) = √(2π) e^{−u²/2}. Since |F[R](u)| = π/( ν |sin(π(1+iu)/ν)| ) ≲ e^{−π|u|/ν} and Ĥ(u ± M) concentrates around u = ∓M, it follows that

sup_{y ∈ R_+} |q_{2,M}(y) − q_{1,M}(y)| = sup_{y ∈ R_+} |(q ∘ ρ_M)(y)| ≍ exp(−Mπ/ν),    M → ∞.

The second statement of the lemma follows from Lemma 6.2 and the fact that M[q ∘ ρ_M] = M[q] · M[ρ_M].

Let T_{1,M} and T_{2,M} be two random variables with densities q_{1,M} and q_{2,M}, respectively. Then the density of the r.v. B_{T_{i,M}}, i = 1, 2, is given by

p_{i,M}(x) := (1/√(2π)) ∫_0^∞ λ^{−1/2} e^{−x²/(2λ)} q_{i,M}(λ) dλ,    i = 1, 2.

For the Mellin transform of p_{i,M} we get

(17)    M[p_{i,M}](z) = E[ |B_{T_{i,M}}|^{z−1} ] = E[|B_1|^{z−1}] E[ T_{i,M}^{(z−1)/2} ] = ( 2^{(z−1)/2}/√π ) Γ(z/2) M[q_{i,M}]((z+1)/2),    i = 1, 2.

Lemma 6.4. The χ²-distance between the densities p_{1,M} and p_{2,M} fulfils

χ²(p_{1,M}, p_{2,M}) = ∫ ( p_{1,M}(x) − p_{2,M}(x) )² / p_{2,M}(x) dx ≲ e^{−M(2π + 2/ν)},    M → ∞.

Fix some κ ∈ (0, 1/2). Due to Lemma 6.4, the inequality n χ²(p_{1,M}, p_{2,M}) ≤ κ holds for M large enough, provided

M ≥ ( (1+ε) log(n) + (2/ν) log(log(n)) ) / (2π + 2/ν)

for arbitrarily small ε > 0. Hence Lemma 6.3 and Theorem 6.1 imply

inf_{p̂_n} sup_{p ∈ C(β,L)} P^n_p( d(p̂_n, p) ≥ c v_n ) ≥ δ

for any β < π/ν < π, some constants c > 0, δ > 0, and v_n = n^{−2β/(π+2β)} log^{−ρ}(n) with some ρ > 0.
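The auxiliary functions q and ρ_M used in this lower-bound construction have, as claimed, unit mass, an explicit Mellin transform, and a vanishing integral. A quick numerical confirmation (with arbitrarily chosen ν and M, as an illustration only):

```python
import numpy as np
from scipy import integrate

nu, M = 3.0, 4.0

# q(x) = (nu sin(pi/nu)/pi) / (1 + x^nu): a probability density on R_+ with
# Mellin transform sin(pi/nu) / sin(pi z / nu).
q = lambda x: nu * np.sin(np.pi / nu) / np.pi / (1.0 + x ** nu)
total = integrate.quad(q, 0, np.inf)[0]
print(total)                                   # ≈ 1

s = 1.4
mellin_q = integrate.quad(lambda x: q(x) * x ** (s - 1), 0, np.inf)[0]
print(mellin_q, np.sin(np.pi / nu) / np.sin(np.pi * s / nu))

# The perturbation rho_M integrates to zero, i.e. M[rho_M](1) = 0.
rho = lambda x: (np.exp(-np.log(x) ** 2 / 2) / (np.sqrt(2 * np.pi) * x)
                 * np.sin(M * np.log(x)))
zero_mass = integrate.quad(rho, 0, np.inf)[0]
print(zero_mass)                               # ≈ 0
```

These two facts are exactly what makes q_{2,M} = q + q ∘ ρ_M a valid probability density for every M.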

6.4 Proof of Theorem 4.1

In order to prove the asymptotic normality of ρ_n ( p_{n,γ}(x) − E[p_{n,γ}(x)] ), we need to check the Lyapounov condition, i.e. for some δ > 0,

E|Z_{n,1} − E[Z_{n,1}]|^{2+δ} / ( n^{δ/2} [Var(Z_{n,1})]^{1+δ/2} ) → 0,    n → ∞.

For this we need a lower bound for Var(Z_{n,1}). Since E[Z_{n,1}] = E[p_{n,γ}(x)] → p_T(x), it suffices to show that Var(Z_{n,1}) ≳ h_n^a exp(b/h_n) and E|Z_{n,1}|^{2+δ} ≲ h_n^{−c} exp( (2+δ) b/(2h_n) ) for some positive constants a, b and c. Then Lyapounov's condition holds for any δ > 0 by choosing h_n ≍ (log n)^{−1}. We have

Var(Z_{n,1}) = (1/4π) ∫_{−1/h_n}^{1/h_n} ∫_{−1/h_n}^{1/h_n} x^{−2γ−i(v−u)} Cov( |X_1|^{2(γ+iv)−2}, |X_1|^{2(γ−iu)−2} ) / ( 2^{2γ−2+i(v−u)} Γ(γ−1/2+iv) Γ(γ−1/2−iu) ) dv du =: R_1 − R_2,

where, by the identity M[p_X](4γ−3+2i(v−u)) = ( 2^{2γ−2+i(v−u)}/√π ) Γ(2γ−3/2+i(v−u)) M[p_T](2γ−1+i(v−u)),

R_1 = ( x^{−2γ}/(4π√π) ) ∫_{−1/h_n}^{1/h_n} ∫_{−1/h_n}^{1/h_n} x^{−i(v−u)} Γ(2γ−3/2+i(v−u)) M[p_T](2γ−1+i(v−u)) / ( Γ(γ−1/2+iv) Γ(γ−1/2−iu) ) dv du,

R_2 = ( E[Z_{n,1}] )² = ( (x^{−γ}/2π) ∫_{−1/h_n}^{1/h_n} x^{−iv} M[p_T](γ+iv) dv )².

Note that

R_2 ≤ ( (x^{−γ}/2π) ∫_{−1/h_n}^{1/h_n} |M[p_T](γ+iv)| dv )² ≤ C < ∞

by the second assumption of the theorem. Without loss of generality we may take x = 1 in R_1 (for x ≠ 1 the proof is similar) and write R_1 = (1/(4π√π)) I_n with

I_n := ∫_{−1/h_n}^{1/h_n} ∫_{−1/h_n}^{1/h_n} Γ(2γ−3/2+i(v−u)) M[p_T](2γ−1+i(v−u)) / ( Γ(γ−1/2+iv) Γ(γ−1/2−iu) ) dv du.

Observe that for some constants C_1 > 0, C_2 > 0,

|Γ(γ−1/2+iv)| ≥ C_1 1{|v| ≤ 1} + C_2 1{|v| > 1} |v|^{γ−1} e^{−π|v|/2},
|Γ(γ−1/2−iu)| ≥ C_1 1{|u| ≤ 1} + C_2 1{|u| > 1} |u|^{γ−1} e^{−π|u|/2},

and that

|Γ(2γ−3/2+i(v−u))| ≤ D_1 1{|u−v| ≤ 1} + D_2 1{|u−v| > 1} |u−v|^{2(γ−1)} e^{−π|u−v|/2}

for some D_1 > 0, D_2 > 0. Using these estimates, one can easily derive that, for any ρ > 0, the contribution to I_n of the region where |v−u| ≥ ρ, or |u| ≤ 1/h_n − ρ, or |v| ≤ 1/h_n − ρ, can be bounded in absolute value by O( h_n^{−l} e^{π/h_n} e^{−πρ/2} ) for some l > 0

11 for n. Similarly and /hn /hn /h n /hn /hn /h n Γγ 3/ + iv u))m[p T ]γ + iv u)) u hn /h ρ v u ρ dv du n Γγ / + iv)γγ / iu) O h l n e π ρ)) hn Γγ 3/ + iv u))m[p T ]γ + iv u)) v hn /h ρ v u ρ dv du n Γγ / + iv)γγ / iu) O h l n e π ρ)) hn for some l >. Hence I,n,ρ : /hn /hn /h n /hn /hn /h n Γγ 3/ + iv u))m[p T ]γ + iv u)) v u ρ dv du /h n Γγ / + iv)γγ / iu) /h n u hn ρ v hn ρ v u ρ Γγ 3/ + iv u))m[p T ]γ + iv u)) Γγ / + iv)γγ / iu) I 3,n,ρ + O h l n e π ρ)) hn. dv du + O h l n e π ρ)) hn Now let us study the asymptotic behaviour of the integral I 3,n,ρ. To this end, we will use the Stirling formulas Γγ / + iv) γ / + iv) γ +iv e γ+/ iv π + O v )), Γγ / iu) γ / iu) γ iu e γ+/+iu π + O u )). First consider the case u, v +, where Γγ / + iv)γγ / iu) π exp [iv log v iu log u i v u)] exp [ π ] u + v) + γ ) log v + log u) + O/u) + O/v)). Let ρ n h α n for < α < /. Then on the set { u } { ρ n v } ρ n { v u ρ n } {v, u } h n h n we define u /h n r, v /h n s with < r, s < ρ n, r s < ρ n to get i/hn s) log/hn s) i/hn r) log/hn r) ir s) Γγ / + iv)γγ / iu) e Using the asymptotic decomposition hn γ ) exp [ π/h n ] exp [r + s)π] + Oh n )) + Oρ n h n )). /h n s) log /h n s) /h n r) log /h n r) r s) r s) log /h n ) + Oρ nh n ),

we derive

Γ(γ−1/2+iv) Γ(γ−1/2−iu) = 2π h_n^{−2(γ−1)} exp[−π/h_n] exp[(r+s)π/2] exp[ i(r−s) log(1/h_n) ] (1 + O(ρ_n² h_n)).

Analogously, on the set

{ −1/h_n ≤ u ≤ −1/h_n + ρ_n } ∩ { −1/h_n ≤ v ≤ −1/h_n + ρ_n } ∩ { |v−u| ≤ ρ_n }

we define u = −1/h_n + r, v = −1/h_n + s, with 0 < r, s < ρ_n, |r−s| < ρ_n, to get

Γ(γ−1/2+iv) Γ(γ−1/2−iu) = 2π h_n^{−2(γ−1)} exp[−π/h_n] exp[(r+s)π/2] exp[ −i(r−s) log(1/h_n) ] (1 + O(ρ_n² h_n)).

Hence the integral I_{3,n,ρ_n} can be decomposed as follows:

I_{3,n,ρ_n} = ( h_n^{2(γ−1)}/π ) exp[π/h_n] { Re[I_{4,n,ρ_n}] + O(ρ_n² h_n) },

where

I_{4,n,ρ_n} = ∫∫_{ 0 < r, s < ρ_n, |r−s| < ρ_n } exp[−(r+s)π/2] Γ(2γ−3/2+i(r−s)) M[p_T](2γ−1+i(r−s)) exp[ i(s−r) log(1/h_n) ] dr ds = ∫_0^{2ρ_n} e^{−vπ/2} R_n(v) dv,

with (after the change of variables v = r+s, u = r−s)

R_n(v) = (1/2) ∫_{ |u| ≤ ρ_n ∧ v ∧ (2ρ_n − v) } Γ(2γ−3/2+iu) M[p_T](2γ−1+iu) e^{ −iu log(1/h_n) } du.

Using the well-known results on the asymptotics of Fourier-type integrals, it is easy to show that

R_n(v) = e^{iπ/2} Γ(2γ−3/2) M[p_T](2γ−1) log^{−1}(1/h_n) + e^{iπ} [ d/du ( Γ(2γ−3/2+iu) M[p_T](2γ−1+iu) ) ]_{u=0} log^{−2}(1/h_n) + O( log^{−3}(1/h_n) )

uniformly in v. As a result,

Re[I_{4,n,ρ_n}] = [ d/du ( Γ(2γ−3/2+iu) M[p_T](2γ−1+iu) ) ]_{u=0} log^{−2}(1/h_n) + O( log^{−3}(1/h_n) ).

Combining all the above estimates, we finally get

(18)    Var(Z_{n,1}) = ( h_n^{2(γ−1)}/(4π²√π) ) log^{−2}(1/h_n) exp[π/h_n] { [ d/du ( Γ(2γ−3/2+iu) M[p_T](2γ−1+iu) ) ]_{u=0} + O( log^{−1}(1/h_n) ) + O( ρ_n² h_n log²(1/h_n) ) + O( e^{−πρ_n/2} log²(1/h_n) ) }.

Using the decomposition (18), the Lyapounov condition

E|Z_{n,1} − E[Z_{n,1}]|^{2+δ} / ( n^{δ/2} [Var(Z_{n,1})]^{1+δ/2} ) → 0,    n → ∞,

for some δ > 0 is easy to verify, since E[Z_{n,1}] → p_T(x).
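The Gamma-function bounds of Lemma 6.5 and Corollary 6.6, which drive the variance estimates above, can be illustrated numerically; for α = 1/2 the classical identity |Γ(1/2+iβ)|² = π/cosh(πβ) makes the exponential growth of the integral explicit. The snippet is a sanity check of ours, not part of the proof.

```python
import numpy as np
from scipy import integrate, special

# Lemma 6.5: |Γ(alpha + i beta)| ≍ beta^{alpha - 1/2} e^{-pi beta / 2};
# the classical asymptotic constant is sqrt(2 pi).
alpha = 0.9
for beta in (5.0, 10.0, 20.0):
    ratio = (abs(special.gamma(alpha + 1j * beta))
             / (beta ** (alpha - 0.5) * np.exp(-np.pi * beta / 2)))
    print(beta, ratio)                       # → sqrt(2 pi) ≈ 2.5066

# Corollary 6.6 with alpha = 1/2: |Γ(1/2 + i beta)|^2 = pi / cosh(pi beta),
# so ∫_{-U}^{U} d beta / |Γ(1/2 + i beta)| grows like a constant times e^{U pi/2}.
U = 6.0
I = integrate.quad(lambda b: 1.0 / abs(special.gamma(0.5 + 1j * b)), -U, U)[0]
print(I / np.exp(U * np.pi / 2))             # stabilises to a constant
```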

Lemma 6.5. For any α, there exist positive constants C_1 and C_2(α) such that, uniformly for β ≥ 1,

(19)    C_1 β^{α−1/2} e^{−βπ/2} ≤ |Γ(α+iβ)| ≤ C_2(α) β^{α−1/2} e^{−βπ/2}.

Corollary 6.6. For all 0 < α ≤ 1/2 and all U > 1, it holds that

(20)    ∫_{−U}^{U} dβ / |Γ(α+iβ)| ≤ C U^{1/2−α} e^{Uπ/2}

for a constant C > 0. For α > 1/2, we have

(21)    ∫_{−U}^{U} dβ / |Γ(α+iβ)| ≤ C_1(α) + C_2 e^{Uπ/2},

where C_2 does not depend on α.

References

[1] Ole Barndorff-Nielsen, John Kent, and Michael Sørensen. Normal variance-mean mixtures and z distributions. International Statistical Review, 50(2):145–159, 1982.
[2] Alexander Meister. Deconvolution problems in nonparametric statistics, volume 193 of Lecture Notes in Statistics. Springer, 2009.
[3] Jan Obłój. The Skorokhod embedding problem and its offspring. Probability Surveys, 1:321–390, 2004.
[4] Alexandre B. Tsybakov. Introduction to nonparametric estimation. Springer Series in Statistics. Springer, 2009.
[5] Bert van Es, Shota Gugushvili, and Peter Spreij. A kernel type nonparametric density estimator for decompounding. Bernoulli, 13(3):672–694, 2007.
[6] Bert van Es and Peter Spreij. Estimation of a multivariate stochastic volatility density by kernel deconvolution. Journal of Multivariate Analysis.
[7] Bert van Es, Peter Spreij, and Harry van Zanten. Nonparametric volatility density estimation. Bernoulli, 9(3):451–465, 2003.


More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 59 Classical case: n d. Asymptotic assumption: d is fixed and n. Basic tools: LLN and CLT. High-dimensional setting: n d, e.g. n/d

More information

CREATES Research Paper A necessary moment condition for the fractional functional central limit theorem

CREATES Research Paper A necessary moment condition for the fractional functional central limit theorem CREATES Research Paper 2010-70 A necessary moment condition for the fractional functional central limit theorem Søren Johansen and Morten Ørregaard Nielsen School of Economics and Management Aarhus University

More information

9 Brownian Motion: Construction

9 Brownian Motion: Construction 9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of

More information

Chapter 5. Chapter 5 sections

Chapter 5. Chapter 5 sections 1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Gaussian Random Fields: Geometric Properties and Extremes

Gaussian Random Fields: Geometric Properties and Extremes Gaussian Random Fields: Geometric Properties and Extremes Yimin Xiao Michigan State University Outline Lecture 1: Gaussian random fields and their regularity Lecture 2: Hausdorff dimension results and

More information

A Bootstrap View on Dickey-Fuller Control Charts for AR(1) Series

A Bootstrap View on Dickey-Fuller Control Charts for AR(1) Series AUSTRIAN JOURNAL OF STATISTICS Volume 35 (26), Number 2&3, 339 346 A Bootstrap View on Dickey-Fuller Control Charts for AR(1) Series Ansgar Steland RWTH Aachen University, Germany Abstract: Dickey-Fuller

More information

Fast learning rates for plug-in classifiers under the margin condition

Fast learning rates for plug-in classifiers under the margin condition Fast learning rates for plug-in classifiers under the margin condition Jean-Yves Audibert 1 Alexandre B. Tsybakov 2 1 Certis ParisTech - Ecole des Ponts, France 2 LPMA Université Pierre et Marie Curie,

More information

Mod-φ convergence I: examples and probabilistic estimates

Mod-φ convergence I: examples and probabilistic estimates Mod-φ convergence I: examples and probabilistic estimates Valentin Féray (joint work with Pierre-Loïc Méliot and Ashkan Nikeghbali) Institut für Mathematik, Universität Zürich Summer school in Villa Volpi,

More information

D I S C U S S I O N P A P E R

D I S C U S S I O N P A P E R I N S T I T U T D E S T A T I S T I Q U E B I O S T A T I S T I Q U E E T S C I E N C E S A C T U A R I E L L E S ( I S B A ) UNIVERSITÉ CATHOLIQUE DE LOUVAIN D I S C U S S I O N P A P E R 2014/06 Adaptive

More information

RESEARCH REPORT. Estimation of sample spacing in stochastic processes. Anders Rønn-Nielsen, Jon Sporring and Eva B.

RESEARCH REPORT. Estimation of sample spacing in stochastic processes.   Anders Rønn-Nielsen, Jon Sporring and Eva B. CENTRE FOR STOCHASTIC GEOMETRY AND ADVANCED BIOIMAGING www.csgb.dk RESEARCH REPORT 6 Anders Rønn-Nielsen, Jon Sporring and Eva B. Vedel Jensen Estimation of sample spacing in stochastic processes No. 7,

More information

Estimation of the quantile function using Bernstein-Durrmeyer polynomials

Estimation of the quantile function using Bernstein-Durrmeyer polynomials Estimation of the quantile function using Bernstein-Durrmeyer polynomials Andrey Pepelyshev Ansgar Steland Ewaryst Rafaj lowic The work is supported by the German Federal Ministry of the Environment, Nature

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

Nonparametric regression with martingale increment errors

Nonparametric regression with martingale increment errors S. Gaïffas (LSTA - Paris 6) joint work with S. Delattre (LPMA - Paris 7) work in progress Motivations Some facts: Theoretical study of statistical algorithms requires stationary and ergodicity. Concentration

More information

Chapter 5 continued. Chapter 5 sections

Chapter 5 continued. Chapter 5 sections Chapter 5 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

Time Series and Forecasting Lecture 4 NonLinear Time Series

Time Series and Forecasting Lecture 4 NonLinear Time Series Time Series and Forecasting Lecture 4 NonLinear Time Series Bruce E. Hansen Summer School in Economics and Econometrics University of Crete July 23-27, 2012 Bruce Hansen (University of Wisconsin) Foundations

More information

Additive functionals of infinite-variance moving averages. Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535

Additive functionals of infinite-variance moving averages. Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535 Additive functionals of infinite-variance moving averages Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535 Departments of Statistics The University of Chicago Chicago, Illinois 60637 June

More information

The Hilbert Transform and Fine Continuity

The Hilbert Transform and Fine Continuity Irish Math. Soc. Bulletin 58 (2006), 8 9 8 The Hilbert Transform and Fine Continuity J. B. TWOMEY Abstract. It is shown that the Hilbert transform of a function having bounded variation in a finite interval

More information

Simulation of conditional diffusions via forward-reverse stochastic representations

Simulation of conditional diffusions via forward-reverse stochastic representations Weierstrass Institute for Applied Analysis and Stochastics Simulation of conditional diffusions via forward-reverse stochastic representations Christian Bayer and John Schoenmakers Numerical methods for

More information

Branching Processes II: Convergence of critical branching to Feller s CSB

Branching Processes II: Convergence of critical branching to Feller s CSB Chapter 4 Branching Processes II: Convergence of critical branching to Feller s CSB Figure 4.1: Feller 4.1 Birth and Death Processes 4.1.1 Linear birth and death processes Branching processes can be studied

More information

Actuarial models. Proof. We know that. which is. Furthermore, S X (x) = Edward Furman Risk theory / 72

Actuarial models. Proof. We know that. which is. Furthermore, S X (x) = Edward Furman Risk theory / 72 Proof. We know that which is S X Λ (x λ) = exp S X Λ (x λ) = exp Furthermore, S X (x) = { x } h X Λ (t λ)dt, 0 { x } λ a(t)dt = exp { λa(x)}. 0 Edward Furman Risk theory 4280 21 / 72 Proof. We know that

More information

BLOWUP THEORY FOR THE CRITICAL NONLINEAR SCHRÖDINGER EQUATIONS REVISITED

BLOWUP THEORY FOR THE CRITICAL NONLINEAR SCHRÖDINGER EQUATIONS REVISITED BLOWUP THEORY FOR THE CRITICAL NONLINEAR SCHRÖDINGER EQUATIONS REVISITED TAOUFIK HMIDI AND SAHBI KERAANI Abstract. In this note we prove a refined version of compactness lemma adapted to the blowup analysis

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability Chapter 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary family

More information

arxiv: v1 [math.pr] 24 Aug 2018

arxiv: v1 [math.pr] 24 Aug 2018 On a class of norms generated by nonnegative integrable distributions arxiv:180808200v1 [mathpr] 24 Aug 2018 Michael Falk a and Gilles Stupfler b a Institute of Mathematics, University of Würzburg, Würzburg,

More information

Asymptotic results for empirical measures of weighted sums of independent random variables

Asymptotic results for empirical measures of weighted sums of independent random variables Asymptotic results for empirical measures of weighted sums of independent random variables B. Bercu and W. Bryc University Bordeaux 1, France Workshop on Limit Theorems, University Paris 1 Paris, January

More information

Lecture 1: August 28

Lecture 1: August 28 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random

More information

Pathwise volatility in a long-memory pricing model: estimation and asymptotic behavior

Pathwise volatility in a long-memory pricing model: estimation and asymptotic behavior Pathwise volatility in a long-memory pricing model: estimation and asymptotic behavior Ehsan Azmoodeh University of Vaasa Finland 7th General AMaMeF and Swissquote Conference September 7 1, 215 Outline

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability 2002-2003 Week 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary

More information

Problem set 2 The central limit theorem.

Problem set 2 The central limit theorem. Problem set 2 The central limit theorem. Math 22a September 6, 204 Due Sept. 23 The purpose of this problem set is to walk through the proof of the central limit theorem of probability theory. Roughly

More information

Small ball probabilities and metric entropy

Small ball probabilities and metric entropy Small ball probabilities and metric entropy Frank Aurzada, TU Berlin Sydney, February 2012 MCQMC Outline 1 Small ball probabilities vs. metric entropy 2 Connection to other questions 3 Recent results for

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Concentration inequalities and the entropy method

Concentration inequalities and the entropy method Concentration inequalities and the entropy method Gábor Lugosi ICREA and Pompeu Fabra University Barcelona what is concentration? We are interested in bounding random fluctuations of functions of many

More information

Differentiating Series of Functions

Differentiating Series of Functions Differentiating Series of Functions James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University October 30, 017 Outline 1 Differentiating Series Differentiating

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

1 Weak Convergence in R k

1 Weak Convergence in R k 1 Weak Convergence in R k Byeong U. Park 1 Let X and X n, n 1, be random vectors taking values in R k. These random vectors are allowed to be defined on different probability spaces. Below, for the simplicity

More information

Stationary particle Systems

Stationary particle Systems Stationary particle Systems Kaspar Stucki (joint work with Ilya Molchanov) University of Bern 5.9 2011 Introduction Poisson process Definition Let Λ be a Radon measure on R p. A Poisson process Π is a

More information

Concentration Inequalities for Density Functionals. Shashank Singh

Concentration Inequalities for Density Functionals. Shashank Singh Concentration Inequalities for Density Functionals by Shashank Singh Submitted to the Department of Mathematical Sciences in partial fulfillment of the requirements for the degree of Master of Science

More information

1 Fourier Integrals of finite measures.

1 Fourier Integrals of finite measures. 18.103 Fall 2013 1 Fourier Integrals of finite measures. Denote the space of finite, positive, measures on by M + () = {µ : µ is a positive measure on ; µ() < } Proposition 1 For µ M + (), we define the

More information

Quantitative Oppenheim theorem. Eigenvalue spacing for rectangular billiards. Pair correlation for higher degree diagonal forms

Quantitative Oppenheim theorem. Eigenvalue spacing for rectangular billiards. Pair correlation for higher degree diagonal forms QUANTITATIVE DISTRIBUTIONAL ASPECTS OF GENERIC DIAGONAL FORMS Quantitative Oppenheim theorem Eigenvalue spacing for rectangular billiards Pair correlation for higher degree diagonal forms EFFECTIVE ESTIMATES

More information

Understanding Regressions with Observations Collected at High Frequency over Long Span

Understanding Regressions with Observations Collected at High Frequency over Long Span Understanding Regressions with Observations Collected at High Frequency over Long Span Yoosoon Chang Department of Economics, Indiana University Joon Y. Park Department of Economics, Indiana University

More information

THE DVORETZKY KIEFER WOLFOWITZ INEQUALITY WITH SHARP CONSTANT: MASSART S 1990 PROOF SEMINAR, SEPT. 28, R. M. Dudley

THE DVORETZKY KIEFER WOLFOWITZ INEQUALITY WITH SHARP CONSTANT: MASSART S 1990 PROOF SEMINAR, SEPT. 28, R. M. Dudley THE DVORETZKY KIEFER WOLFOWITZ INEQUALITY WITH SHARP CONSTANT: MASSART S 1990 PROOF SEMINAR, SEPT. 28, 2011 R. M. Dudley 1 A. Dvoretzky, J. Kiefer, and J. Wolfowitz 1956 proved the Dvoretzky Kiefer Wolfowitz

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

March 1, Florida State University. Concentration Inequalities: Martingale. Approach and Entropy Method. Lizhe Sun and Boning Yang.

March 1, Florida State University. Concentration Inequalities: Martingale. Approach and Entropy Method. Lizhe Sun and Boning Yang. Florida State University March 1, 2018 Framework 1. (Lizhe) Basic inequalities Chernoff bounding Review for STA 6448 2. (Lizhe) Discrete-time martingales inequalities via martingale approach 3. (Boning)

More information

Estimation of the Bivariate and Marginal Distributions with Censored Data

Estimation of the Bivariate and Marginal Distributions with Censored Data Estimation of the Bivariate and Marginal Distributions with Censored Data Michael Akritas and Ingrid Van Keilegom Penn State University and Eindhoven University of Technology May 22, 2 Abstract Two new

More information

arxiv: v1 [stat.me] 17 Jan 2008

arxiv: v1 [stat.me] 17 Jan 2008 Some thoughts on the asymptotics of the deconvolution kernel density estimator arxiv:0801.2600v1 [stat.me] 17 Jan 08 Bert van Es Korteweg-de Vries Instituut voor Wiskunde Universiteit van Amsterdam Plantage

More information

1. Aufgabenblatt zur Vorlesung Probability Theory

1. Aufgabenblatt zur Vorlesung Probability Theory 24.10.17 1. Aufgabenblatt zur Vorlesung By (Ω, A, P ) we always enote the unerlying probability space, unless state otherwise. 1. Let r > 0, an efine f(x) = 1 [0, [ (x) exp( r x), x R. a) Show that p f

More information

The Lindeberg central limit theorem

The Lindeberg central limit theorem The Lindeberg central limit theorem Jordan Bell jordan.bell@gmail.com Department of Mathematics, University of Toronto May 29, 205 Convergence in distribution We denote by P d the collection of Borel probability

More information

Hilbert Spaces. Contents

Hilbert Spaces. Contents Hilbert Spaces Contents 1 Introducing Hilbert Spaces 1 1.1 Basic definitions........................... 1 1.2 Results about norms and inner products.............. 3 1.3 Banach and Hilbert spaces......................

More information

Statistical Inverse Problems and Instrumental Variables

Statistical Inverse Problems and Instrumental Variables Statistical Inverse Problems and Instrumental Variables Thorsten Hohage Institut für Numerische und Angewandte Mathematik University of Göttingen Workshop on Inverse and Partial Information Problems: Methodology

More information

OPTIMAL POINTWISE ADAPTIVE METHODS IN NONPARAMETRIC ESTIMATION 1

OPTIMAL POINTWISE ADAPTIVE METHODS IN NONPARAMETRIC ESTIMATION 1 The Annals of Statistics 1997, Vol. 25, No. 6, 2512 2546 OPTIMAL POINTWISE ADAPTIVE METHODS IN NONPARAMETRIC ESTIMATION 1 By O. V. Lepski and V. G. Spokoiny Humboldt University and Weierstrass Institute

More information

On the Breiman conjecture

On the Breiman conjecture Péter Kevei 1 David Mason 2 1 TU Munich 2 University of Delaware 12th GPSD Bochum Outline Introduction Motivation Earlier results Partial solution Sketch of the proof Subsequential limits Further remarks

More information

On corrections of classical multivariate tests for high-dimensional data. Jian-feng. Yao Université de Rennes 1, IRMAR

On corrections of classical multivariate tests for high-dimensional data. Jian-feng. Yao Université de Rennes 1, IRMAR Introduction a two sample problem Marčenko-Pastur distributions and one-sample problems Random Fisher matrices and two-sample problems Testing cova On corrections of classical multivariate tests for high-dimensional

More information

Example continued. Math 425 Intro to Probability Lecture 37. Example continued. Example

Example continued. Math 425 Intro to Probability Lecture 37. Example continued. Example continued : Coin tossing Math 425 Intro to Probability Lecture 37 Kenneth Harris kaharri@umich.edu Department of Mathematics University of Michigan April 8, 2009 Consider a Bernoulli trials process with

More information

SDS : Theoretical Statistics

SDS : Theoretical Statistics SDS 384 11: Theoretical Statistics Lecture 1: Introduction Purnamrita Sarkar Department of Statistics and Data Science The University of Texas at Austin https://psarkar.github.io/teaching Manegerial Stuff

More information

STAT 512 sp 2018 Summary Sheet

STAT 512 sp 2018 Summary Sheet STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}

More information

Susceptible-Infective-Removed Epidemics and Erdős-Rényi random

Susceptible-Infective-Removed Epidemics and Erdős-Rényi random Susceptible-Infective-Removed Epidemics and Erdős-Rényi random graphs MSR-Inria Joint Centre October 13, 2015 SIR epidemics: the Reed-Frost model Individuals i [n] when infected, attempt to infect all

More information

Monte Carlo Integration I [RC] Chapter 3

Monte Carlo Integration I [RC] Chapter 3 Aula 3. Monte Carlo Integration I 0 Monte Carlo Integration I [RC] Chapter 3 Anatoli Iambartsev IME-USP Aula 3. Monte Carlo Integration I 1 There is no exact definition of the Monte Carlo methods. In the

More information

Multivariate Least Weighted Squares (MLWS)

Multivariate Least Weighted Squares (MLWS) () Stochastic Modelling in Economics and Finance 2 Supervisor : Prof. RNDr. Jan Ámos Víšek, CSc. Petr Jonáš 12 th March 2012 Contents 1 2 3 4 5 1 1 Introduction 2 3 Proof of consistency (80%) 4 Appendix

More information

On detection of unit roots generalizing the classic Dickey-Fuller approach

On detection of unit roots generalizing the classic Dickey-Fuller approach On detection of unit roots generalizing the classic Dickey-Fuller approach A. Steland Ruhr-Universität Bochum Fakultät für Mathematik Building NA 3/71 D-4478 Bochum, Germany February 18, 25 1 Abstract

More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information