The Asymptotic Minimax Constant for Sup-Norm Loss in Nonparametric Density Estimation


ALEXANDER KOROSTELEV^1 and MICHAEL NUSSBAUM^2

^1 Department of Mathematics, Wayne State University, Detroit, MI 48202, USA
^2 Weierstrass Institute, Mohrenstr. 39, D Berlin, Germany

March 1999

Abstract

We develop the exact constant of the risk asymptotics in the uniform norm for density estimation. This constant was first found for nonparametric regression and for signal estimation in Gaussian white noise. Hölder classes for arbitrary smoothness index $\beta > 0$ on the unit interval are considered. The constant involves the value of an optimal recovery problem, as in the white noise case, but in addition it depends on the maximum of the densities in the function class.

Keywords: density estimation, exact constant, optimal recovery, uniform norm risk, white noise

RUNNING TITLE: Asymptotic minimax density estimation

* To whom correspondence should be addressed.

1 Introduction and Main Result

Recently in Korostelev (1993) an asymptotically minimax exact constant was found for loss in the uniform norm, for Gaussian nonparametric regression when the parameter set is a Hölder function class. This risk bound represents an analog of the now classical $L_2$-minimax constant of Pinsker (1980), valid for a Sobolev function class. Donoho (1994) extended Korostelev's (1993) result to signal estimation in Gaussian white noise and showed it to be related to nonstochastic optimal recovery. Here we consider density estimation from i.i.d. data with a sup-norm loss.

Consider a sample $X_1, \dots, X_n$ of i.i.d. observations having a probability density $f = f(x)$ on the interval $0 \le x \le 1$. Let $\beta, L$ be positive constants, and let $\Sigma(\beta, L)$ be the class of densities

$$\Sigma(\beta, L) = \Big\{ g : \int_0^1 g = 1,\ g \ge 0,\ |g^{(\lfloor\beta\rfloor)}(x_1) - g^{(\lfloor\beta\rfloor)}(x_2)| \le L |x_1 - x_2|^{\beta - \lfloor\beta\rfloor},\ 0 \le x_1, x_2 \le 1 \Big\},$$

where $\lfloor\beta\rfloor$ is the greatest integer strictly less than $\beta$. Assume that the density $f$ belongs a priori to $\Sigma(\beta, L)$. Consider an arbitrary estimator $\hat f_n = \hat f_n(x)$ measurable w.r.t. the observations $X_1, \dots, X_n$. We define the discrepancy of $\hat f_n(x)$ and the true density $f(x)$ by the sup-norm

$$\|\hat f_n - f\|_\infty, \quad \text{where } \|f\|_\infty = \sup_{0 \le x \le 1} |f(x)|.$$

Denote by $P_f^{(n)}$ the probability distribution of the observations $X_1, \dots, X_n$, and by $E_f^{(n)}$ the expectation w.r.t. $P_f^{(n)}$. Let $w(u)$, $u \ge 0$, be a continuous increasing function which admits a polynomial majorant $w(u) \le W_0(1 + u^\gamma)$ with some positive constants $W_0, \gamma$, and such that $w(0) = 0$. Introduce the minimax risk

$$r_n = r_n(w(\cdot); \beta, L) = \inf_{\hat f_n} \sup_{f \in \Sigma(\beta, L)} E_f^{(n)} w(\psi_n^{-1} \|\hat f_n - f\|_\infty) \qquad (1)$$

where $\psi_n = ((\log n)/n)^{\beta/(2\beta+1)}$ is the optimal rate of convergence (cf. Khasminskii (1978), Stone (1982), Ibragimov and Khasminskii (1982)). The main goal of this paper is to find the exact asymptotics of the risk (1). To do this we need two additional definitions. First, note that the densities in $\Sigma(\beta, L)$ are uniformly bounded, i.e.

$$B_* = B_*(\beta, L) = \max_{f \in \Sigma(\beta, L)} \max_{0 \le x \le 1} f(x) < +\infty. \qquad (2)$$

An argument for this based on imbedding theorems, as well as further information on the value of $B_*$, is given in an appendix (Section 4). Secondly, denote by $\Sigma_0(\beta, L)$ an auxiliary class of functions on the whole real line:

$$\Sigma_0(\beta, L) = \big\{ f : |f^{(\lfloor\beta\rfloor)}(x_1) - f^{(\lfloor\beta\rfloor)}(x_2)| \le L |x_1 - x_2|^{\beta - \lfloor\beta\rfloor},\ x_1, x_2 \in \mathbb{R}^1 \big\}.$$

Let $\|g\|_2$ denote the $L_2$-norm of $g$. Define the constant

$$A_\beta = \max\big\{ g(0) : \|g\|_2 \le 1,\ g \in \Sigma_0(\beta, 1) \big\}. \qquad (3)$$

Theorem. For any $\beta > 0$, $L > 0$ and for any loss function $w(u)$ the minimax risk (1) satisfies

$$\lim_{n \to \infty} r_n = w(C),$$

where

$$C = C(\beta, L, B_*) = A_\beta \left( \frac{2 B_* L^{1/\beta}}{2\beta + 1} \right)^{\beta/(2\beta+1)},$$

and the constants $B_* = B_*(\beta, L)$ and $A_\beta$ are defined by (2) and (3) respectively.

The proof of the corresponding upper and lower asymptotic risk bounds is developed in Sections 2 and 3. A more concise argument based on asymptotic equivalence of experiments in the Le Cam sense is possible (cf. Nussbaum (1996)), but only in the case $\beta > 1/2$, and under the additional assumption that the densities are uniformly

bounded away from 0. While asymptotic equivalence is known to fail for $\beta \le 1/2$ (cf. Brown and Zhang (1998)), our method here yields the sup-norm constant for density estimation for all $\beta > 0$. The proof via asymptotic equivalence can be found in the technical report (Korostelev and Nussbaum (1995)).

2 Upper Asymptotic Bound

Let $g$ be a solution of the extremal problem in (3), $g \in \Sigma_0(\beta, 1)$. The correctness of this definition follows from Micchelli and Rivlin (1977), and, as shown by Leonov (1997), $g$ has compact support. Consider also the solution $g_1 \in \Sigma_0(\beta, 1)$ of the dual extremal problem

$$\min\big\{ \|g_1\|_2 : g_1(0) = 1,\ g_1 \in \Sigma_0(\beta, 1) \big\}. \qquad (4)$$

If $g$ is the solution of (3) then $g_1(u) = A_\beta^{-1} g(A_\beta^{1/\beta} u)$ (cf. Section 2.2 of Donoho (1994)); hence $\|g_1\|_2 = A_\beta^{-(2\beta+1)/(2\beta)}$. Since $g$ is of compact support, so is $g_1$; let $S$ be a constant such that $g_1(u) = 0$ for $|u| > S$. Put

$$K(u) = g_1(u) \Big/ \int g_1, \quad u \in \mathbb{R}^1,$$

and choose the bandwidth $h_n = (C\psi_n/L)^{1/\beta}$. For an arbitrarily small fixed $\epsilon > 0$ define regular grid points in the interval $[0, 1]$ by $x_k = \epsilon k h_n$, $k = 0, \dots, M$, where $M = M(n, \epsilon) = (\epsilon h_n)^{-1}$ is assumed to be an integer. Put $M_0 = [S/\epsilon] + 1$, and introduce the kernel estimator $f_n^*$ at the inner grid points

$$f_n^*(x_k) = (n h_n)^{-1} \sum_{i=1}^n K((X_i - x_k)/h_n), \quad k = M_0, \dots, M - M_0.$$
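For orientation, the quantities just defined are easy to evaluate numerically. The value of $A_\beta$ has no elementary closed form for general $\beta$, so it is treated as an input below; this is an illustrative sketch, not part of the proof.

```python
import math

def psi_n(n, beta):
    """Optimal sup-norm rate psi_n = ((log n)/n)^(beta/(2*beta+1))."""
    return (math.log(n) / n) ** (beta / (2 * beta + 1))

def constant_C(beta, L, B_star, A_beta):
    """Exact constant C = A_beta * (2*B_star*L^(1/beta)/(2*beta+1))^(beta/(2*beta+1))."""
    return A_beta * (2 * B_star * L ** (1 / beta) / (2 * beta + 1)) ** (beta / (2 * beta + 1))

def bandwidth(n, beta, L, B_star, A_beta):
    """Bandwidth h_n = (C * psi_n / L)^(1/beta) used by the kernel estimator."""
    C = constant_C(beta, L, B_star, A_beta)
    return (C * psi_n(n, beta) / L) ** (1 / beta)
```

For instance, with $\beta = 1$ the rate is $((\log n)/n)^{1/3}$, and the constant grows with the maximal density $B_*$, reflecting the dependence stated in the theorem.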

Lemma 1. There exists a constant $p_0 > 0$ such that for any $\alpha > 0$ the inequality holds

$$\sup_{f \in \Sigma(\beta, L)} P_f^{(n)}\Big( \max_{M_0 \le k \le M - M_0} |f_n^*(x_k) - f(x_k)| \ge (1 + \alpha) C \psi_n \Big) \le p_0 M^{-\alpha}.$$

Proof. Define the bias and stochastic terms by

$$b_{nk} = E_f^{(n)}[f_n^*(x_k)] - f(x_k), \qquad z_{nk} = f_n^*(x_k) - E_f^{(n)}[f_n^*(x_k)].$$

For any $\alpha > 0$ the following inequalities are true:

$$P_f^{(n)}\Big( \max_{M_0 \le k \le M - M_0} (f_n^*(x_k) - f(x_k)) \ge (1 + \alpha) C \psi_n \Big) = P_f^{(n)}\Big( \max_k (z_{nk} + b_{nk}) \ge (1 + \alpha) C \psi_n \Big)$$
$$\le P_f^{(n)}\Big( \max_k z_{nk} \ge (1 + \alpha) C \psi_n - \max_k |b_{nk}| \Big)$$
$$\le \sum_{k=M_0}^{M-M_0} P_f^{(n)}\Big( z_{nk} \ge (1 + \alpha) C \psi_n - \sup_{f \in \Sigma_0(\beta, L),\, f(0)=0} \Big| \int h_n^{-1} K(u/h_n) f(u)\, du \Big| \Big)$$
$$\le \sum_{k=M_0}^{M-M_0} P_f^{(n)}\Big( z_{nk} \ge (1 + \alpha) C \psi_n \Big( 1 - \sup_{f \in \Sigma_0(\beta, 1),\, f(0)=0} \Big| \int K(u) f(u)\, du \Big| \Big) \Big),$$

where the standard renormalization technique applies (see Donoho (1994)). Define $K_\delta(u) = \delta^{-2/(2\beta+1)} g(\delta^{-2/(2\beta+1)} u) \big/ \int g$ for any $\delta > 0$, where $g$ is again the solution of (3). The optimal recovery identity (Micchelli and Rivlin (1977), Donoho (1994)) implies that

$$\sup_{f \in \Sigma_0(\beta, 1)} \sup_{\|z\|_2 \le 1} \Big| \int K_\delta(u) f(u)\, du - f(0) + \delta \int K_\delta(u) z(u)\, du \Big| = \delta^{2\beta/(2\beta+1)} A_\beta,$$

hence

$$\sup_{f \in \Sigma_0(\beta, 1),\, f(0)=0} \Big| \int K_\delta(u) f(u)\, du \Big| + \delta \|K_\delta\|_2 = \delta^{2\beta/(2\beta+1)} A_\beta.$$

The choice $\delta = A_\beta^{-(2\beta+1)/(2\beta)}$ yields $K_\delta(u) = A_\beta^{1/\beta} g(A_\beta^{1/\beta} u)/\int g = g_1(u)/\int g_1 = K(u)$, and hence

$$1 - \sup_{f \in \Sigma_0(\beta, 1),\, f(0)=0} \Big| \int K(u) f(u)\, du \Big| = A_\beta^{-(2\beta+1)/(2\beta)} \|K\|_2.$$

By further calculation we obtain

$$\sqrt{n h_n/(B_* \|K\|_2^2)}\ C \psi_n\ A_\beta^{-(2\beta+1)/(2\beta)} \|K\|_2 = \Big( \frac{2}{2\beta+1} \log n \Big)^{1/2},$$

and that for any $\epsilon < 1$ and any $n$ satisfying

$$\log n > (2\beta+1)\big( \log \epsilon^{-1} + \beta^{-1} \log(L/C) \big)$$

we have

$$\Big( \frac{2}{2\beta+1} \log n \Big)^{1/2} \ge \sqrt{2 \log M}.$$

Thus, the latter sum of probabilities can be estimated from above by

$$\sum_{k=M_0}^{M-M_0} P_f^{(n)}\Big( \sqrt{n h_n/(B_* \|K\|_2^2)}\ z_{nk} \ge (1 + \alpha) \sqrt{2 \log M} \Big).$$

Note that

$$\sqrt{n h_n/(B_* \|K\|_2^2)}\ z_{nk} = n^{-1/2} \sum_{i=1}^n \xi_{ik},$$

where

$$\xi_{ik} = \sqrt{h_n/(B_* \|K\|_2^2)}\ \big( h_n^{-1} K((X_i - x_k)/h_n) - E_f^{(n)}[h_n^{-1} K((X_i - x_k)/h_n)] \big),$$

$i = 1, \dots, n$, $k = M_0, \dots, M - M_0$.

The random variables $\xi_{ik}$, $i = 1, \dots, n$, are independent for any fixed $k$, and

$$E_f^{(n)}[\xi_{ik}] = 0, \qquad \mathrm{Var}_f^{(n)}[\xi_{ik}] = B_*^{-1} f(x_k) + o_n(1) \le 1 + o_n(1), \qquad (5)$$

where $o_n(1) \to 0$ as $n \to \infty$ uniformly in $i$, $k$, and $f \in \Sigma(\beta, L)$. Moreover, for any integer $m \ge 3$, the following bounds hold:

$$E_f^{(n)} |\xi_{ik}|^m \le \big( h_n/(B_* \|K\|_2^2) \big)^{m/2} \big( 2 B_* S\, 2^m H^m / h_n^{m-1} \big) = 2 B_* S h_n (\lambda/\sqrt{h_n})^m, \qquad (6)$$

where $H = \max_{u \in \mathbb{R}^1} |K(u)|$ and $\lambda = 2H/\sqrt{B_* \|K\|_2^2}$. The Chebyshev exponential inequality, known as Chernoff's upper bound, yields

$$P_f^{(n)}\Big( \max_{M_0 \le k \le M-M_0} (f_n^*(x_k) - f(x_k)) \ge (1 + \alpha) C \psi_n \Big) \le \sum_{k=M_0}^{M-M_0} P_f^{(n)}\Big( n^{-1/2} \sum_{i=1}^n \xi_{ik} \ge (1 + \alpha)\sqrt{2 \log M} \Big)$$
$$\le M \exp\big( -c(1 + \alpha)\sqrt{2 \log M} \big) \big( E_f^{(n)}[\exp(c \xi_{ik}/\sqrt{n})] \big)^n.$$

Using (5) and (6), we can estimate the moment generating function as follows:

$$E_f^{(n)}[\exp(c \xi_{ik}/\sqrt{n})] \le 1 + \frac{c^2}{2n} \mathrm{Var}_f^{(n)}[\xi_{ik}] + \sum_{m \ge 3} \frac{1}{m!} \Big( \frac{c}{\sqrt{n}} \Big)^m E_f^{(n)} |\xi_{ik}|^m$$
$$\le 1 + \frac{c^2}{2n}(1 + o_n(1)) + \frac{2 B_* S \lambda^3 c^3}{n \sqrt{n h_n}} \sum_{m \ge 3} \frac{1}{m!} \Big( \frac{\lambda c}{\sqrt{n h_n}} \Big)^{m-3}$$
$$\le 1 + \frac{c^2}{2n}\Big( 1 + o_n(1) + 4 B_* S \lambda^3 \frac{c}{\sqrt{n h_n}} \exp(\lambda c/\sqrt{n h_n}) \Big) \le \exp\Big( \frac{c^2}{2n} (1 + o_n(1)) \Big). \qquad (7)$$

The latter inequality is true for any $c = o_n(\sqrt{n h_n})$ as $n \to \infty$. If we choose $c = \sqrt{2 \log M}$, then (7) implies that

$$P_f^{(n)}\Big( \max_{M_0 \le k \le M-M_0} (f_n^*(x_k) - f(x_k)) \ge (1 + \alpha) C \psi_n \Big) \le M \exp(-2(1+\alpha) \log M) \exp\big( (1 + o_n(1)) \log M \big)$$
$$\le M \exp(-(1+\alpha) \log M) = M^{-\alpha}$$

for any $n$ large enough. The probability of the random event

$$\Big\{ \min_{M_0 \le k \le M-M_0} (f_n^*(x_k) - f(x_k)) \le -(1 + \alpha) C \psi_n \Big\}$$

admits the same upper bound, and this proves the lemma.

To extend the definition of $f_n^*(x_k)$ to the grid points $x_k$ which are close to the endpoints of the interval $[0, 1]$, we take a kernel $K_0(u)$ with support $[0, 1]$ satisfying the orthogonality conditions

$$\int_0^1 K_0 = 1, \quad \text{and} \quad \int_0^1 u^j K_0 = 0, \quad j = 1, \dots, \lfloor\beta\rfloor.$$

Put

$$f_n^*(x_k) = (n \kappa h_n)^{-1} \sum_{i=1}^n K_0((X_i - x_k)/(\kappa h_n)), \quad k = 0, \dots, M_0 - 1, \qquad (8)$$

where a small positive constant $\kappa$ is chosen in Lemma 2 below. For the grid points $x_k \in [1 - S h_n, 1]$ we define

$$f_n^*(x_k) = (n \kappa h_n)^{-1} \sum_{i=1}^n K_0((x_k - X_i)/(\kappa h_n)), \quad k = M - M_0 + 1, \dots, M.$$

Put $\mathcal{M} = \{0, \dots, M_0 - 1\} \cup \{M - M_0 + 1, \dots, M\}$.
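The grid-point estimator, with one-sided kernels near the endpoints, can be sketched as follows. The paper's optimal-recovery kernel $K = g_1/\int g_1$ has no elementary closed form, so the Epanechnikov kernel and a uniform boundary kernel are used below purely as illustrative stand-ins; all function names are ours.

```python
import numpy as np

def epanechnikov(u):
    """Stand-in interior kernel (NOT the optimal-recovery kernel of the paper)."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def boundary_kernel(u):
    """Stand-in for K0: supported on [0, 1], integral 1 (here simply uniform)."""
    u = np.asarray(u, dtype=float)
    return np.where((u >= 0.0) & (u <= 1.0), 1.0, 0.0)

def kernel_estimates(X, grid, h, left_cut, right_cut, kappa=1.0):
    """f_n(x_k) = (n h)^(-1) sum_i K((X_i - x_k)/h) at interior grid points;
    one-sided kernels with bandwidth kappa*h near the endpoints, as in (8)."""
    X = np.asarray(X, dtype=float)
    out = np.empty(len(grid))
    for k, x in enumerate(grid):
        if x < left_cut:        # left boundary: K0((X_i - x)/(kappa*h))
            out[k] = np.mean(boundary_kernel((X - x) / (kappa * h))) / (kappa * h)
        elif x > right_cut:     # right boundary: K0((x - X_i)/(kappa*h))
            out[k] = np.mean(boundary_kernel((x - X) / (kappa * h))) / (kappa * h)
        else:                   # interior points
            out[k] = np.mean(epanechnikov((X - x) / h)) / h
    return out
```

For a Uniform[0, 1] sample the estimates should be close to 1 at interior and boundary grid points alike, which is exactly what the boundary correction is for.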

Lemma 2. There exist constants $p_0$ and $p_1$ such that for any $n$ and for any $\alpha > 0$ the inequality holds

$$\sup_{f \in \Sigma(\beta, L)} P_f^{(n)}\Big( \max_{k \in \mathcal{M}} |f_n^*(x_k) - f(x_k)| \ge (1 + \alpha) C \psi_n \Big) \le p_0 M^{-\alpha p_1}. \qquad (9)$$

Proof. To prove (9), it suffices to derive the upper bound for the probability

$$P_f^{(n)}\Big( \max_{0 \le k < M_0} (f_n^*(x_k) - f(x_k)) \ge (1 + \alpha) C \psi_n \Big) \le p_0 M^{-\alpha p_1}. \qquad (10)$$

The bias $b_{nk}$ of the estimator (8) at any point $x_k$ is $O((\kappa h_n)^\beta)$ as $n \to \infty$ (see Devroye and Györfi (1985)). Choose $\kappa$ so small that $|b_{nk}| \le C \psi_n / 2$, $k = 0, \dots, M_0 - 1$. Taking into account our choice of $\kappa$, and following the lines of the proof of Lemma 1, for all $n$ large enough we have the inequalities

$$P_f^{(n)}\Big( \max_{0 \le k < M_0} (f_n^*(x_k) - f(x_k)) \ge (1 + \alpha) C \psi_n \Big) \le \sum_{k=0}^{M_0-1} P_f^{(n)}\big( z_{nk} \ge (1 + \alpha) C \psi_n - C \psi_n/2 \big)$$
$$= \sum_{k=0}^{M_0-1} P_f^{(n)}\Big( n^{-1/2} \sum_{i=1}^n \xi'_{ik} \ge (1/2 + \alpha) \sqrt{\log n/(2\beta+1)} \Big) \le \sum_{k=0}^{M_0-1} P_f^{(n)}\Big( n^{-1/2} \sum_{i=1}^n \xi'_{ik} \ge (1/2 + \alpha) \sqrt{\log M} \Big),$$

where

$$\xi'_{ik} = \sqrt{\frac{\psi_n^{1/\beta}}{C^2(2\beta+1)}}\ \Big( \frac{1}{\kappa h_n} K_0\Big( \frac{X_i - x_k}{\kappa h_n} \Big) - E_f^{(n)}\Big[ \frac{1}{\kappa h_n} K_0\Big( \frac{X_i - x_k}{\kappa h_n} \Big) \Big] \Big).$$

Similarly to (7) we obtain the inequality

$$E_f^{(n)}[\exp(c \xi'_{ik}/\sqrt{n})] \le \exp\Big( \frac{c^2}{2n} \mathrm{Var}_f^{(n)}[\xi'_{ik}]\, (1 + o_n(1)) \Big)$$

with the only difference that the variance $\mathrm{Var}_f^{(n)}[\xi'_{ik}] \le \sigma_0^2$ is bounded by some constant $\sigma_0^2$ which is not necessarily 1, as in (7). Note that $M_0$ is independent of $n$. Applying Chebyshev's exponential inequality, we have that uniformly in $f \in \Sigma(\beta, L)$

$$P_f^{(n)}\Big( \max_{0 \le k < M_0} (f_n^*(x_k) - f(x_k)) \ge (1 + \alpha) C \psi_n \Big) \le M_0 \exp\big( -c(1/2 + \alpha)\sqrt{\log M} \big) \exp\Big( \frac{c^2 \sigma_0^2}{2} (1 + o_n(1)) \Big).$$

Under the choice $c = \sqrt{\log M}/\sigma_0^2$, the latter formula yields the upper bound

$$M_0 \exp\big( -\alpha \log M/\sigma_0^2 \big)(1 + o(1)) \le M_0\, M^{-\alpha/(2\sigma_0^2)}$$

for $n$ large enough. This completes the proof of (10), and the lemma follows.

The derivatives $f^{(m)}(x)$, $m = 1, \dots, \lfloor\beta\rfloor$, of a density $f \in \Sigma(\beta, L)$ can be estimated in the sup-norm with the minimax rate $O(h_n^{\beta-m})$ as $n \to \infty$. We need the following version of the upper bound.

Lemma 3. For any $m = 1, \dots, \lfloor\beta\rfloor$ there exist an estimator $f_n^{(m)}$ and positive constants $p_0$, $p_1$, and $C_1$ such that for any $n$ and for any $\alpha > 0$ the inequality holds

$$\sup_{f \in \Sigma(\beta, L)} P_f^{(n)}\Big( \max_{0 \le k \le M} |f_n^{(m)}(x_k) - f^{(m)}(x_k)| \ge (1 + \alpha) C_1 h_n^{\beta-m} \Big) \le p_0 M^{-\alpha p_1}.$$

Proof. Note that the upper bound in this lemma is crude since $C_1$ is not necessarily optimal. Choose the kernel $K_0(u)$ as in Lemma 2, i.e. $K_0(u)$ has support in $[0, 1]$ and satisfies the orthogonality conditions. Assume that $K_0$ has $\lfloor\beta\rfloor + 1$ continuous derivatives. For a fixed $m \le \lfloor\beta\rfloor$, put

$$f_n^{(m)}(x_k) = \frac{(-1)^m}{n h_n^{1+m}} \sum_{i=1}^n K_0^{(m)}\Big( \frac{X_i - x_k}{h_n} \Big) \quad \text{if } 0 \le x_k \le 1/2$$

and

$$f_n^{(m)}(x_k) = \frac{1}{n h_n^{1+m}} \sum_{i=1}^n K_0^{(m)}\Big( \frac{x_k - X_i}{h_n} \Big) \quad \text{if } 1/2 < x_k \le 1,$$

where $K_0^{(m)}$ is the $m$th derivative of $K_0$. Standard arguments show that at each point the bias term is bounded from above by $C_2 h_n^{\beta-m}$ with a positive constant $C_2$, uniformly in $f \in \Sigma(\beta, L)$ and $x_k \in [0, 1]$. Take $C_1 > 2 C_2$. Then

$$P_f^{(n)}\Big( \max_{0 \le k \le M} |f_n^{(m)}(x_k) - f^{(m)}(x_k)| \ge (1 + \alpha) C_1 h_n^{\beta-m} \Big) \le P_f^{(n)}\Big( \max_{0 \le k \le M} |z_{nk}^{(m)}| \ge (1/2 + \alpha) C_1 h_n^{\beta-m} \Big),$$

where $z_{nk}^{(m)} = f_n^{(m)}(x_k) - E_f^{(n)}[f_n^{(m)}(x_k)]$ are zero-mean random variables. Following the lines of the proof of Lemma 2, we find that for all $n$ large enough the latter probability is bounded from above by

$$2M \exp\big( -c(1/2 + \alpha)\sqrt{C_1 \log M} \big) \exp\big( c^2 \sigma_m^2/2 \big)$$

with an arbitrary positive $c$ and a constant $\sigma_m^2 > 0$ independent of $n$. Choose $C_1 > 8 \sigma_m^2$, and put $c = (1/2 + \alpha)\sqrt{C_1 \log M}/\sigma_m^2$. Direct calculations show that the latter bound turns into

$$2M \exp\Big( -\frac{1}{2}(1/2 + \alpha)^2 \frac{C_1}{\sigma_m^2} \log M \Big) \le 2M^{1 - 4(1/2 + \alpha)^2} \le 2M^{-4\alpha},$$

which proves the lemma.

Proof of Theorem: upper risk bound. Take the estimators $f_n^*$ and $f_n^{(m)}$ as in Lemmas 1-3. For any $x \in [x_k, x_{k+1})$ continue $f_n^*$ as the polynomial approximation

$$f_n^*(x) = f_n^*(x_k) + \sum_{m=1}^{\lfloor\beta\rfloor} \frac{1}{m!} f_n^{(m)}(x_k)(x - x_k)^m, \quad x_k \le x < x_{k+1},\ k = 0, \dots, M-1.$$

Uniformly in $f \in \Sigma(\beta, L)$ we have the inequality

$$\|f_n^* - f\|_\infty \le L(\epsilon h_n)^\beta / \lfloor\beta\rfloor! + \max_{0 \le k \le M} |f_n^*(x_k) - f(x_k)| + \sum_{m=1}^{\lfloor\beta\rfloor} \frac{1}{m!} (\epsilon h_n)^m \max_{0 \le k \le M} |f_n^{(m)}(x_k) - f^{(m)}(x_k)|,$$

where the first term in the right-hand side appears from the Taylor expansion of the density functions $f \in \Sigma(\beta, L)$. When the complementary events to those in Lemmas 1-3 hold, then

$$\|f_n^* - f\|_\infty \le (1 + \alpha)(C + C_2 \epsilon) \psi_n$$

with a positive constant $C_2$ independent of $n$, $\alpha$ and $\epsilon$. Applying Lemmas 1-3, we have

$$\sup_{f \in \Sigma(\beta, L)} P_f^{(n)}\big( \|f_n^* - f\|_\infty \ge (1 + \alpha)(C + C_2 \epsilon) \psi_n \big) \le p_2 M^{-\alpha p_3}, \qquad (11)$$

where $p_2 = (1 + \lfloor\beta\rfloor) p_0$ and $p_3 = \min[1; p_1]$. Take an arbitrarily small $\alpha_0$, and put $\alpha_j = j \alpha_0$, $u_j = (C + C_2 \epsilon)(1 + \alpha_j)$, $j = 1, 2, \dots$. Finally, for any continuous loss function $w(u)$ with the polynomial majorant, we obtain from (11) that

$$\sup_{f \in \Sigma(\beta, L)} E_f^{(n)} w(\psi_n^{-1} \|f_n^* - f\|_\infty) \le w\big( (1 + \alpha_0)(C + C_2 \epsilon) \big) + W_0 \sum_{j \ge 1} (1 + u_{j+1}^\gamma)\, p_2 M^{-j \alpha_0 p_3}.$$

Since the latter sum vanishes as $n \to \infty$, and $\alpha_0$, $\epsilon$ are arbitrarily small, the upper bound follows.

3 Lower Asymptotic Bound

We first formulate a lemma in a general framework. For each $j = 1, \dots, M$ let $Q_{j,\vartheta}$, $\vartheta \in [-1, 1]$, be a dominated family of distributions on some measurable space $(\mathcal{X}_j, \mathcal{F}_j)$. Let $R = [-1, 1]^M$, $\theta \in R$, and let $Q_\theta = \bigotimes_{j=1}^M Q_{j,\theta_j}$, $\theta \in R$, be the family of product measures indexed by $\theta = (\theta_1, \dots, \theta_M)$. Define $\|\theta\|_M = \max_{1 \le j \le M} |\theta_j|$.

Lemma 4. Let $\pi_j$ be discrete prior distributions with finite support on $[-1, 1]$, and consider the Bayes risks

$$r_{j,T}(\pi_j) = \inf_{\hat\vartheta_j} \int_{[-1,1]} Q_{j,\vartheta}\big( |\hat\vartheta_j - \vartheta| > T \big)\, \pi_j(d\vartheta), \quad j = 1, \dots, M, \qquad (12)$$

where the infimum is taken over nonrandomized estimators $\hat\vartheta_j$ of $\vartheta$ depending only on data from $\mathcal{X}_j$. Let $\hat\theta$ denote nonrandomized estimators of $\theta$ depending on the whole data vector $x = (x_j)_{j=1,\dots,M}$, $x_j \in \mathcal{X}_j$, let $\pi = \bigotimes_{j=1}^M \pi_j$ and consider the Bayes risk

$$r_T(\pi) = \inf_{\hat\theta} \int Q_\theta\big( \|\hat\theta - \theta\|_M > T \big)\, \pi(d\theta).$$

Then for any $T > 0$

$$r_T(\pi) = 1 - \prod_{j=1}^M \big( 1 - r_{j,T}(\pi_j) \big).$$

Proof. The $j$-th Bayes risk $r_{j,T}(\pi_j)$ with data $x_j$ from $\mathcal{X}_j$ can be found as follows. Let $Q_{j,x_j}$ be the posterior distribution for $\vartheta$ and $Q_j$ be the marginal distribution for $x_j$; then

$$\int_{[-1,1]} Q_{j,\vartheta}\big( |\hat\vartheta_j - \vartheta| > T \big)\, \pi_j(d\vartheta) = 1 - \int g_{j,T}\big( x_j, \hat\vartheta_j(x_j) \big)\, Q_j(dx_j),$$

where $g_{j,T}$ is the posterior gain

$$g_{j,T}(x_j, t) = Q_{j,x_j}\big( |t - \vartheta| \le T \big).$$

If $S_j$ is the finite support of $\pi_j$ then $Q_{j,x_j}$ is concentrated on $S_j \subset [-1, 1]$. For any $t \in [-1, 1]$ we have

$$g_{j,T}(x_j, t) = \sum_{\vartheta \in S_j :\, |t - \vartheta| \le T} Q_{j,x_j}(\{\vartheta\}).$$

This function of $t$ takes only finitely many values, and a maximum in $t$ is attained on some closed interval $t \in [t_{\min}(x_j), t_{\max}(x_j)]$. For uniqueness, take $\hat\vartheta_j^*(x_j) = t_{\max}(x_j)$ as a Bayes estimator. We then have

$$\max_{t \in [-1,1]} g_{j,T}(x_j, t) = g_{j,T}\big( x_j, \hat\vartheta_j^*(x_j) \big), \qquad (13)$$

$$r_{j,T}(\pi_j) = 1 - \int g_{j,T}\big( x_j, \hat\vartheta_j^*(x_j) \big)\, Q_j(dx_j). \qquad (14)$$

Consider now the global problem: we have

$$r_T(\pi) = \inf_{\hat\theta} \int Q_\theta\big( \|\hat\theta - \theta\|_M > T \big)\, \pi(d\theta) = \inf_{\hat\theta} \Big( 1 - \int\!\!\int \prod_{j=1}^M \chi_{[-T,T]}(\hat\theta_j - \theta_j)\, Q_\theta(dx)\, \pi(d\theta) \Big)$$
$$= 1 - \sup_{\hat\theta} \int g_T\big( x, \hat\theta(x) \big) \prod_{j=1}^M Q_j(dx_j), \qquad (15)$$

where $g_T(x, u)$ is the posterior gain (for $u = (u_j)_{j=1,\dots,M}$):

$$g_T(x, u) = \prod_{j=1}^M Q_{j,x_j}\big( |u_j - \vartheta| \le T \big) = \prod_{j=1}^M g_{j,T}(x_j, u_j).$$

Then (13) implies

$$\max_{u \in R} g_T(x, u) = \prod_{j=1}^M \max_{t \in [-1,1]} g_{j,T}(x_j, t) = \prod_{j=1}^M g_{j,T}\big( x_j, \hat\vartheta_j^*(x_j) \big).$$

Thus a Bayes estimator of $\theta$ is

$$\hat\theta^*(x) = \big( \hat\vartheta_j^*(x_j) \big)_{j=1,\dots,M},$$

and from (15) and (14) we obtain

$$r_T(\pi) = 1 - \int g_T\big( x, \hat\theta^*(x) \big) \prod_{j=1}^M Q_j(dx_j) = 1 - \prod_{j=1}^M \int g_{j,T}\big( x_j, \hat\vartheta_j^*(x_j) \big)\, Q_j(dx_j) = 1 - \prod_{j=1}^M \big( 1 - r_{j,T}(\pi_j) \big).$$

Back in our density problem, take a small value $\epsilon = \epsilon(\alpha) \in (0, 1)$; the final choice of $\epsilon$ will be made below. Let $f \in \Sigma(\beta, L)$ be such that $f^{(\lfloor\beta\rfloor)}(x)$ is constant in an interval $x \in [t_1, t_2]$, $0 < t_2 - t_1 \le \epsilon$, and $f(x) \ge B_*/(1 + \epsilon)$ for $x \in [t_1, t_2]$ (cf. Lemma A.3 below).
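Lemma 4 reduces the global Bayes risk to the per-coordinate risks through the product formula $r_T(\pi) = 1 - \prod_j (1 - r_{j,T}(\pi_j))$. A minimal numeric sketch of this formula:

```python
def global_bayes_risk(coord_risks):
    """r_T(pi) = 1 - prod_j (1 - r_j) for independent coordinate problems (Lemma 4)."""
    prod = 1.0
    for r in coord_risks:
        prod *= 1.0 - r
    return 1.0 - prod
```

With $M$ coordinates each of small risk $r$, the global risk $1 - (1 - r)^M$ tends to 1 as $M$ grows, which is exactly the mechanism driving the lower bound below.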

Set $f_0 = f(t_1)$; then $f_0 \ge B_*/(1 + \epsilon)$. Consider again the solution $g_1 \in \Sigma_0(\beta, 1)$ of the extremal problem (4); recall $\|g_1\|_2 = A_\beta^{-(2\beta+1)/(2\beta)}$ and that $S$ is such that $g_1(u) = 0$ for $|u| > S$. Define

$$g_\epsilon(u) = g_1(u - S) - \epsilon g_1\big( \epsilon(u - 2S(1 + \epsilon^{-1})) \big), \quad u \in \mathbb{R}.$$

As is easily seen, $\int g_\epsilon = 0$, $\|g_\epsilon\|_2^2 = (1 + \epsilon)\|g_1\|_2^2$ and $g_\epsilon \in \Sigma_0(\beta, 1)$ for $\epsilon$ sufficiently small. Set $l_n = 2S(1 + 1/\epsilon) h_n$ and redefine $M = M(n, \epsilon)$ from Section 2 as $M = [n^{1/((2\beta+1)(1+\epsilon))}]$. Introduce a family of functions

$$f(x; \theta) = f(x; \theta_1, \dots, \theta_M) = f(x) + L h_n^\beta \sum_{j=1}^M \theta_j\, g_\epsilon\big( h_n^{-1}(x - a_j) \big), \quad 0 \le x \le 1, \qquad (16)$$

where $a_1 = t_1$, $a_{j+1} - a_j = l_n$, $j = 1, \dots, M$, $\theta = (\theta_1, \dots, \theta_M) \in R = [-1, 1]^M$. The density $f(x; \theta)$ differs from $f(x)$ only in the interval $[t_1, t_1 + M l_n] \subset [t_1, t_2]$ for $n$ large, since $M h_n \to 0$ as $n \to \infty$ for any fixed $\epsilon$. Since $f^{(\lfloor\beta\rfloor)}$ is constant on $x \in [t_1, t_2]$, we obtain that for $\epsilon$ small enough and $n$ sufficiently large $f(x; \theta) \in \Sigma(\beta, L)$ for $\theta \in R$. Put for shortness $P_{f(\cdot;\theta)}^{(n)} = P_\theta^{(n)}$ and $E_{f(\cdot;\theta)}^{(n)} = E_\theta^{(n)}$.

Define intervals $J_j = [a_j, a_j + l_n)$, $j = 1, \dots, M$, and let $P_{j,\theta_j}$ be the conditional distribution of $X_1$ given that $X_1 \in J_j$, when $\theta$ obtains. Let $\kappa(\cdot, \cdot)$ be the Kullback-Leibler information number: for laws $P_1$, $P_2$ such that $P_1 \ll P_2$,

$$\kappa(P_1, P_2) = \int \log \frac{dP_1}{dP_2}\, dP_1.$$

Consider also

$$\kappa_2^2(P_1, P_2) = \int \Big( \log \frac{dP_1}{dP_2} \Big)^2 dP_1, \qquad \kappa_\infty(P_1, P_2) = \operatorname{ess\,sup}_{P_1} \Big| \log \frac{dP_1}{dP_2} \Big|.$$
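The perturbation family (16) can be sketched numerically. The extremal function $g_1$ has no elementary form, so the compactly supported zero-integral bump below is a hypothetical stand-in, and the uniform density replaces $f$; the point illustrated is only that each perturbation integrates to zero, so $f(\cdot; \theta)$ keeps total mass 1 for every $\theta$.

```python
import numpy as np

def bump(u):
    """Hypothetical stand-in for g_eps: supported on [-1, 1] with integral zero
    (odd function times an even window)."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, np.sin(np.pi * u) * (1.0 - u ** 2), 0.0)

def perturbed_density(x, theta, h, centers, L=1.0, beta=1.0):
    """f(x; theta) = f(x) + L * h^beta * sum_j theta_j * bump((x - a_j)/h),
    here with f = 1 (uniform on [0, 1]) as the base density."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    for t, a in zip(theta, centers):
        out += L * h ** beta * t * bump((x - a) / h)
    return out
```

Because each bump has integral zero and amplitude of order $h^\beta$, the perturbed function stays a positive density however the signs $\theta_j \in [-1, 1]$ are chosen, which is what makes the $2^M$-fold hypercube of alternatives available for the lower bound.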

Lemma 5. Let $\vartheta \in [0, 1]$ and consider the quantities $\kappa = \kappa(P_1, P_2)$, $\kappa_2 = \kappa_2(P_1, P_2)$ and $\kappa_\infty = \kappa_\infty(P_1, P_2)$ for measures $P_1 = P_{j,\vartheta}$, $P_2 = P_{j,-\vartheta}$ and $j = 1, \dots, M$. Set

$$\mu = 2(1 + \epsilon)^2/(2\beta + 1), \qquad n_0 = n\, l_n\, f_0. \qquad (17)$$

Then uniformly over $j = 1, \dots, M$, as $n \to \infty$:

(i) $\kappa = 2\vartheta^2 \mu_0\, n_0^{-1} \log n\, (1 + o(1))$ for some positive constant $\mu_0 = \mu_0(\beta, L, \epsilon)$, $\mu_0 \le \mu$;

(ii) $\kappa_2^2 = 2\kappa (1 + o(1))$;

(iii) $\kappa_\infty^2 = O(n_0^{-1} \log n)$.

Proof. Define

$$\eta_j = l_n^{-1} \int_{J_j} f(x)\, dx.$$

The distribution $P_{j,\vartheta}$ has density

$$f_j(x; \vartheta) = \big( f(x) + \vartheta L h_n^\beta g_\epsilon(h_n^{-1}(x - a_j)) \big) \big/ (l_n \eta_j), \quad x \in J_j.$$

Observe that $f(x) = f_0 + o(1)$ and $\eta_j = f_0 + o(1)$ uniformly in $j$ and $x$. In the sequel we use the notation $o^*(1)$, $O^*(1)$ for quantities which are $o(1)$ or $O(1)$ as $n \to \infty$ uniformly over $x \in J_j$ and $j = 1, \dots, M$. Recall $f_0 \ge B_*/(1 + \epsilon)$. Define further

$$z_j(x) = L h_n^\beta g_\epsilon\big( h_n^{-1}(x - a_j) \big) / f(x);$$

we then obtain

$$f_j(x; \vartheta) = l_n^{-1} \big( 1 + \vartheta z_j(x) \big)\big( 1 + o^*(1) \big), \quad x \in J_j. \qquad (18)$$

Now $\int g_\epsilon = 0$ entails $\int z_j(x) f(x)\, dx = 0$, and as a consequence

$$\int z_j(x) f_j(x; \vartheta)\, dx = \vartheta l_n^{-1} \Big( \int z_j^2(x)\, dx \Big)\big( 1 + o^*(1) \big). \qquad (19)$$

Note the following relation: for $0 < z \to 0$,

$$\log \frac{1 + z}{1 - z} = 2z + O(z^3). \qquad (20)$$

Note also

$$z_j^2(x) = O^*(h_n^{2\beta}) \qquad (21)$$

and the following equalities of order of magnitude (denoted $\asymp$), which are immediate consequences of our definitions:

$$h_n^{2\beta} \asymp (\log n/n)^{2\beta/(2\beta+1)} \asymp n_0^{-1} \log n. \qquad (22)$$

Proof of (i). We have

$$\kappa = \int \log \frac{1 + \vartheta z_j(x)}{1 - \vartheta z_j(x)}\, f_j(x; \vartheta)\, dx;$$

consequently, in view of (19) and (20),

$$\kappa = 2\vartheta \int z_j(x) f_j(x; \vartheta)\, dx + O^*\big( \sup_x |z_j(x)|^3 \big) = 2\vartheta^2 l_n^{-1} \Big( \int z_j^2(x)\, dx \Big)\big( 1 + o^*(1) \big) + O^*\big( (n_0^{-1} \log n)^{3/2} \big). \qquad (23)$$

Note that

$$l_n^{-1} \int z_j^2(x)\, dx = l_n^{-1} L^2 h_n^{2\beta+1} (1 + \epsilon) \|g_1\|_2^2\, f_0^{-2} \big( 1 + o^*(1) \big).$$

Recall $\|g_1\|_2^2 = A_\beta^{-(2\beta+1)/\beta}$; an evaluation of the r.h.s. above yields

$$l_n^{-1} \int z_j^2(x)\, dx = \big( B_*/(f_0(1 + \epsilon)) \big)\, \mu\, n_0^{-1} \log n\, \big( 1 + o^*(1) \big). \qquad (24)$$

Set $\mu_0 = \big( B_*/(f_0(1 + \epsilon)) \big) \mu$; then $\mu_0$ depends on $\epsilon$, $\beta$, $B_* = B_*(\beta, L)$ and $f_0 = f(t_1)$, and the function $f$ can be selected to depend only on $\beta$ and $L$ (cf. Lemma A.3). The inequality $f_0 \ge B_*/(1 + \epsilon)$ now completes the proof of (i).

Proof of (ii). We have

$$\kappa_2^2 = \int \Big( \log \frac{1 + \vartheta z_j(x)}{1 - \vartheta z_j(x)} \Big)^2 f_j(x; \vartheta)\, dx = \int \Big( 2\vartheta z_j(x) + O^*\big( (n_0^{-1}\log n)^{3/2} \big) \Big)^2 f_j(x; \vartheta)\, dx$$
$$= 4\vartheta^2 \int z_j^2(x) f_j(x; \vartheta)\, dx + O^*\big( (n_0^{-1}\log n)^2 \big) = 4\vartheta^2 l_n^{-1} \Big( \int z_j^2(x)\, dx \Big)\big( 1 + o^*(1) \big) + O^*\big( (n_0^{-1}\log n)^{3/2} \big),$$

so that (ii) follows from (23) and (24).

Proof of (iii). This is an immediate consequence of (18), (21) and (22).

Let us state a result on large deviations for sums of i.i.d. random variables. Let $Z, Z_1, Z_2, \dots$ be a sequence of independent real random variables with common law $Q$.

Lemma 6. Assume

(i) $E_Q Z = 0$, $\mathrm{Var}_Q Z = 1$;

(ii) there exists a positive constant $C_0$ such that $|Z| \le C_0$ $Q$-a.s.

Let $x_n$ be a sequence such that $x_n \to \infty$, $x_n = o(n^{1/2})$. Then for every $\delta > 0$ we have

$$\Pr\nolimits_Q\Big( n^{-1/2} \sum_{i=1}^n Z_i > x_n \Big) \ge \exp\big( -x_n^2(1 + \delta)/2 \big)\big( 1 + o(1) \big), \quad n \to \infty,$$

uniformly over all $Q$ fulfilling (i) and (ii) for a given constant $C_0$.

Proof. For the moment generating function of $Z$ we have, for $|t| \le 1$, an expansion $E \exp(tZ) = 1 + t^2/2 + \phi$ with a remainder term satisfying $|\phi| \le |t|^3 C_0^3 e^{C_0}/3!$ uniformly over the class of distributions fulfilling (i) and (ii). Hence uniformly over $Q$ the following lower bound holds:

$$\liminf_{n \to \infty}\ x_n^{-2} \log \Pr\nolimits_Q\big( (Z_1 + \dots + Z_n)/(x_n \sqrt{n}) > 1 \big) \ge -1/2$$

(see Wentzell (1990), Theorem 4.4.1, or Freidlin and Wentzell (1984), Section 5.1, Example 4). Thus, for all $n$ large, uniformly over $Q$ satisfying (i), (ii), we have

$$\log \Pr\nolimits_Q\big( (Z_1 + \dots + Z_n)/\sqrt{n} > x_n \big) \ge (-1/2 - \delta) x_n^2,$$

and the lemma follows.

For measures $P_1$, $P_2$ and $P_0 = P_1 + P_2$ let $\Pi(P_1, P_2)$ be the testing affinity between $P_1$ and $P_2$:

$$\Pi(P_1, P_2) = \int \min\big( dP_1/dP_0,\ dP_2/dP_0 \big)\, dP_0.$$

Let $\nu$ be a natural number and consider the $\nu$-fold product measure $P_{j,\vartheta}^\nu$ of $P_{j,\vartheta}$ with itself, for fixed $\vartheta \in [0, 1]$ and for $-\vartheta$, and $j = 1, \dots, M$.

Lemma 7. Let $\vartheta \in [0, 1]$ and assume that $n_0(1 - \epsilon) \le \nu \le n_0(1 + \epsilon)$. Then if $\epsilon$ is sufficiently small,

$$\Pi\big( P_{j,\vartheta}^\nu, P_{j,-\vartheta}^\nu \big) \ge 2 n^{-\vartheta^2 \bar\mu}\big( 1 + o(1) \big)$$

uniformly over $j = 1, \dots, M$, where $\bar\mu = (1 + \epsilon)^6/(2\beta + 1)$.

Proof. It is well known that if $P_1 \ll P_2$ and $P_2 \ll P_1$ then

$$\Pi(P_1, P_2) = P_1\big( dP_2/dP_1 \ge 1 \big) + P_2\big( dP_1/dP_2 > 1 \big).$$

Set $P_1 = P_{j,\vartheta}^\nu$, $P_2 = P_{j,-\vartheta}^\nu$ and consider i.i.d. random variables $\lambda_1, \dots, \lambda_\nu$ having the law of $\lambda = \log(dP_{j,-\vartheta}/dP_{j,\vartheta})$ under $P_{j,\vartheta}$. Then

$$P_1\big( dP_2/dP_1 \ge 1 \big) = P_{j,\vartheta}^\nu\Big( \sum_{i=1}^\nu \lambda_i \ge 0 \Big). \qquad (25)$$

Note that

$$E\lambda = -\kappa(P_{j,\vartheta}, P_{j,-\vartheta}), \qquad \mathrm{Var}\,\lambda = \kappa_2^2(P_{j,\vartheta}, P_{j,-\vartheta}) - \kappa^2(P_{j,\vartheta}, P_{j,-\vartheta}) = 2\kappa(P_{j,\vartheta}, P_{j,-\vartheta})\big( 1 + o^*(1) \big)$$

according to Lemma 5. Set $\bar\lambda_i = (\lambda_i - E\lambda)/(\mathrm{Var}\,\lambda)^{1/2}$, $i = 1, \dots, \nu$; then (25) takes the form

$$P_1\big( dP_2/dP_1 \ge 1 \big) = P_{j,\vartheta}^\nu\Big( \nu^{-1/2} \sum_{i=1}^\nu \bar\lambda_i \ge -\nu^{1/2} E\lambda/(\mathrm{Var}\,\lambda)^{1/2} \Big).$$

We use Lemma 6 for a lower bound to this large deviation probability. Note that $\mathrm{Var}\,\bar\lambda_1 = 1$, and

$$|\bar\lambda_1| = |\lambda - E\lambda|/(\mathrm{Var}\,\lambda)^{1/2} \le 2\kappa_\infty(P_{j,\vartheta}, P_{j,-\vartheta}) \big( \kappa_2^2(P_{j,\vartheta}, P_{j,-\vartheta}) \big)^{-1/2}\big( 1 + o^*(1) \big),$$

which according to Lemma 5 is uniformly bounded for all sufficiently large $n$. This lemma also yields

$$-\nu^{1/2} E\lambda/(\mathrm{Var}\,\lambda)^{1/2} \le (1 + \epsilon)^{1/2} n_0^{1/2}\, 2^{-1/2} \big( \kappa(P_{j,\vartheta}, P_{j,-\vartheta}) \big)^{1/2}\big( 1 + o^*(1) \big) \le (1 + \epsilon)\vartheta \mu^{1/2} (\log n)^{1/2} \qquad (26)$$

for sufficiently large $n$. Moreover, since (cf. (22))

$$\nu \asymp n_0 \asymp n^{2\beta/(2\beta+1)} (\log n)^{1/(2\beta+1)},$$

it follows that the r.h.s. of (26) is of order $(\log \nu)^{1/2}$, hence $o(\nu^{1/2})$. Thus Lemma 6 is applicable for $x_n = (1 + \epsilon)\vartheta \mu^{1/2} (\log n)^{1/2}$: for every $\delta > 0$

$$P_1\big( dP_2/dP_1 \ge 1 \big) \ge \exp\Big( -\frac{1}{2} x_n^2 (1 + \delta) \Big)\big( 1 + o^*(1) \big).$$

Selecting $\delta = \epsilon$, we obtain

$$P_1\big( dP_2/dP_1 \ge 1 \big) \ge n^{-\vartheta^2 (1+\epsilon)^4 \mu/2}\big( 1 + o^*(1) \big) = n^{-\vartheta^2 \bar\mu}\big( 1 + o^*(1) \big).$$

For $P_2(dP_1/dP_2 > 1)$ this lower bound is proved analogously.

Define numbers

$$\nu_j = \sum_{i=1}^n \chi_{J_j}(X_i), \quad j = 1, \dots, M. \qquad (27)$$

The joint distribution of $\nu = (\nu_1, \dots, \nu_M)$ under $P_\theta^{(n)}$ does not depend on $\theta$; call it $P_\nu$.

Lemma 8. For the event

$$N_n = \Big\{ \sup_{j=1,\dots,M} |\nu_j/n_0 - 1| < \epsilon \Big\},$$

where $n_0$ is given by (17), we have $P_\nu(N_n) \to 1$.

Proof. Note that $\nu_j$ is a sum of i.i.d. Bernoulli random variables $\chi_{J_j}(X_i)$, $i = 1, \dots, n$, with expectation $\int_{J_j} f$ and variance $\big( \int_{J_j} f \big)\big( 1 - \int_{J_j} f \big)$. Let $n_j = n \int_{J_j} f$. Bennett's inequality (Shorack, Wellner (1986), Appendix A, p. 851) yields for any $\epsilon' > 0$

$$P_\nu\big( |\nu_j - n_j| \ge n_j \epsilon' \big) \le \exp\big( -\epsilon' n_j^{1/2} C_{\epsilon'} \big) \qquad (28)$$

for a constant $C_{\epsilon'}$. Observe $l_n^{-1} \int_{J_j} f = f_0 + o(1)$ uniformly in $j$, hence $n_j/n_0 \to 1$ uniformly. Note also

$$|\nu_j/n_0 - 1| \le |\nu_j/n_j - 1|\,(n_j/n_0) + |n_j/n_0 - 1|.$$

Select $\epsilon' \le \epsilon/3$ and $n$ sufficiently large such that $|n_j/n_0 - 1| < \epsilon'$; then (28) and $M = [n^{1/((2\beta+1)(1+\epsilon))}]$ imply the assertion.

Proof of Theorem: lower risk bound. We omit those details which are similar to the Gaussian case in Korostelev (1993). It suffices to prove that for an arbitrary estimator $\hat f_n$ and for any small $\alpha > 0$

$$\liminf_{n \to \infty} \sup_{f \in \Sigma(\beta, L)} P_f^{(n)}\big( \|\hat f_n - f\|_\infty > (1 - \alpha) C \psi_n \big) = 1.$$

Standard arguments show that this is implied by

$$\liminf_{n \to \infty} \sup_{\theta \in R} P_\theta^{(n)}\big( \|\hat\theta_n - \theta\|_M > 1 - \alpha \big) = 1, \qquad (29)$$

where $\hat\theta_n = (\hat\theta_{n1}, \dots, \hat\theta_{nM})$ is an arbitrary estimator of $\theta = (\theta_1, \dots, \theta_M)$, $\|\theta\|_M = \max_{1 \le j \le M} |\theta_j|$. For the intervals $J_j = [a_j, a_j + l_n)$ define conditional empirical distribution functions

$$F_{nj}(t) = \nu_j^{-1} \sum_{i=1}^n \chi_{[a_j,\, a_j + t l_n)}(X_i), \quad t \in [0, 1],\ j = 1, \dots, M,$$

where $\nu_j$ are defined in (27). Though the random variables $F_{nj}$ under $P_\theta^{(n)}$ are dependent via the sample $X_1, \dots, X_n$, they are conditionally independent given the number of sample points in each $J_j$. Thus for sets $D_1, \dots, D_M$ in the appropriate sample space

$$P_\theta^{(n)}\big( F_{n1} \in D_1, \dots, F_{nM} \in D_M \mid \nu_1 = n_1, \dots, \nu_M = n_M \big) = \prod_{j=1}^M P_\theta^{(n)}\big( F_{nj} \in D_j \mid \nu_j = n_j \big). \qquad (30)$$

Let $P_{j,\theta_j,\nu_j}^{(n)}$ be the conditional distribution of the process $F_{nj}$ given $\nu_j$; define also a conditional empirical d.f. for the complement of $\bigcup_{j=1}^M J_j$ in $[0, 1]$ and let $P_{0,\nu}^{(n)}$ be its conditional distribution given $\nu = (\nu_1, \dots, \nu_M)$. Then $P_{\theta,\nu}^{(n)} = \big( \bigotimes_{j=1}^M P_{j,\theta_j,\nu_j}^{(n)} \big) \otimes P_{0,\nu}^{(n)}$ represents

the conditional distribution of the whole sample $X_1, \dots, X_n$ given $\nu$. Recall that $P_\nu$ is the joint $P_\theta^{(n)}$-distribution of $\nu$, which is independent of $\theta \in R$. Put

$$C_n = \big\{ \|\hat\theta_n - \theta\|_M > 1 - \alpha \big\}.$$

Consider a prior distribution $\pi = \bigotimes_{j=1}^M \pi_j$ on $R$ where each $\pi_j$ has finite support in $[-1, 1]$. Then

$$\inf_{\hat\theta_n} \sup_{\theta \in R} P_\theta^{(n)}(C_n) \ge \inf_{\hat\theta_n} \int \int_{N_n} P_{\theta,\nu}^{(n)}(C_n)\, P_\nu(d\nu)\, \pi(d\theta) \ge P_\nu(N_n) \inf_{\nu \in N_n} \inf_{\hat\theta_n} \int P_{\theta,\nu}^{(n)}(C_n)\, \pi(d\theta).$$

In view of Lemma 8 it now suffices to prove

$$\inf_{\nu \in N_n} \inf_{\hat\theta_n} \int P_{\theta,\nu}^{(n)}(C_n)\, \pi(d\theta) \ge 1 + o(1). \qquad (31)$$

Applying Lemma 4 we obtain

$$\inf_{\hat\theta_n} \int P_{\theta,\nu}^{(n)}(C_n)\, \pi(d\theta) \ge 1 - \prod_{j=1}^M \big( 1 - r_{j,1-\alpha}(\pi_j) \big), \qquad (32)$$

where $r_{j,1-\alpha}(\pi_j)$ is the Bayes risk (12) for $Q_{j,\theta_j} = P_{j,\theta_j,\nu_j}^{(n)}$, $T = 1 - \alpha$. Now let us estimate this Bayes risk in each of the $M$ (conditionally) independent problems, for $\nu \in N_n$. Note that each measure $P_{j,\theta_j,\nu_j}^{(n)}$ can be construed as coming from an i.i.d. sample of size $\nu_j$ governed by the conditional distribution of $X_1$ given $X_1 \in J_j$, i.e. by $P_{j,\theta_j}$. Consider a test of the hypothesis $\theta_j = \theta_j^+ = 1 - \alpha/2$ vs. $\theta_j = \theta_j^- = -(1 - \alpha/2)$. Let $\pi_j$ be uniform on $\{\theta_j^+, \theta_j^-\}$; then we have (cf. e.g. Strasser (1985))

$$r_{j,1-\alpha}(\pi_j) \ge \frac{1}{2} \Pi\big( P_{j,\theta_j^+,\nu_j}^{(n)}, P_{j,\theta_j^-,\nu_j}^{(n)} \big).$$

Now apply Lemma 7, noting that

$$\Pi\big( P_{j,\theta_j^+,\nu_j}^{(n)}, P_{j,\theta_j^-,\nu_j}^{(n)} \big) = \Pi\big( P_{j,\theta_j^+}^{\nu_j}, P_{j,\theta_j^-}^{\nu_j} \big)$$

and that on $N_n$ we have $n_0(1 - \epsilon) \le \nu_j \le n_0(1 + \epsilon)$. We get

$$r_{j,1-\alpha}(\pi_j) \ge n^{-(1 - \alpha/2)^2 \bar\mu} \qquad (33)$$

for all $j = 1, \dots, M$ if $n$ is large enough. Hence for the r.h.s. in (32) we obtain a lower bound

$$1 - \prod_{j=1}^M \Big( 1 - n^{-(1-\alpha/2)^2 \bar\mu} \Big) \ge 1 - \exp\Big( -M n^{-(1-\alpha/2)^2 \bar\mu} \Big). \qquad (34)$$

We get

$$M n^{-(1-\alpha/2)^2 \bar\mu} = \big( 1 + o(1) \big) n^{\mu'}$$

for an exponent

$$\mu' = \frac{1}{(2\beta+1)(1+\epsilon)} - (1 - \alpha/2)^2 \bar\mu = \frac{1}{(2\beta+1)(1+\epsilon)} - (1 - \alpha/2)^2 \frac{(1+\epsilon)^6}{2\beta+1}.$$

For given $\alpha > 0$, $\epsilon$ can be chosen such that $\mu' > 0$. In that case $\exp\big( -M n^{-(1-\alpha/2)^2 \bar\mu} \big) \to 0$, and (34) implies (31).

4 Appendix: Analytic Facts

The fact that densities of the class $\Sigma(\beta, L)$ are uniformly bounded in sup-norm follows from standard imbedding theorems.

Lemma A.1. For any $L > 0$ and $\beta > 0$

$$B_*(\beta, L) = \max_{f \in \Sigma(\beta, L)} \max_{0 \le x \le 1} f(x) < +\infty.$$

Proof. Apply Theorem 17.4 of Besov, Il'in and Nikol'skii (1979), using the fact that $f$ is bounded in $L_1$-norm on $[0, 1]$.

For $\beta \le 1$ the value of $B_*(\beta, L)$ can be found.

Lemma A.2. For any $L > 0$ and $0 < \beta \le 1$

$$B_*(\beta, L) = \big( (\beta+1)/\beta \big)^{\beta/(\beta+1)} L^{1/(\beta+1)} \quad \text{if } L \ge (\beta+1)/\beta,$$
$$B_*(\beta, L) = 1 + L/(\beta+1) \quad \text{if } L \le (\beta+1)/\beta.$$
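Lemma A.2 gives $B_*(\beta, L)$ in closed form for $0 < \beta \le 1$. A sketch, with the case split derived from whether the extremal density $f(x) = \max(f(0) - L x^\beta, 0)$ hits zero inside $[0, 1]$:

```python
def B_star(beta, L):
    """Maximal sup-norm over Sigma(beta, L) for 0 < beta <= 1 (Lemma A.2).

    First branch: the extremal density max(f(0) - L*x**beta, 0) vanishes
    inside [0, 1]; second branch: it stays positive on all of [0, 1]."""
    if L >= (beta + 1) / beta:
        return ((beta + 1) / beta) ** (beta / (beta + 1)) * L ** (1 / (beta + 1))
    return 1.0 + L / (beta + 1)
```

The two branches agree at the boundary $L = (\beta+1)/\beta$, so $B_*$ is continuous in $L$.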

Proof. It can be shown that the extremal density is $f(x) = \max\big( f(0) - L x^\beta, 0 \big)$, $x \in [0, 1]$. An easy calculation from $\int f(x)\, dx = 1$ yields $f(0)$.

Lemma A.3. For any $L > 0$ and $\beta > 0$, and every $\epsilon \in (0, 1)$, there are $t_1, t_2 \in [0, 1]$, $0 < t_2 - t_1 \le \epsilon$, and a function $f \in \Sigma(\beta, L)$ such that for all $x \in [t_1, t_2]$

$$f(x) \ge B_*(\beta, L)/(1 + \epsilon), \qquad (35)$$

$$f^{(\lfloor\beta\rfloor)}(x) = f^{(\lfloor\beta\rfloor)}(t_1).$$

Proof. Let $f_*$ be a solution in $f$ of the problem (2), i.e. $\|f_*\|_\infty = B_*(\beta, L)$. Let $\epsilon' \in (0, \epsilon)$ and let $t_1, t_2 \in [0, 1]$, $t_2 - t_1 = \epsilon'$, be such that $f_*(x) \ge B_*(\beta, L)/(1 + \epsilon/2)$ for $x \in [t_1, t_2]$. Since $f_* \in \Sigma(\beta, L)$ is continuous on $[0, 1]$, such $t_1, t_2 \in [0, 1]$ exist for sufficiently small $\epsilon'$. Let $m = \lfloor\beta\rfloor$, $\gamma = \beta - m$, and let $t_0 \in [t_1, t_2]$ be such that $f_*^{(m)}(t_0) \ge f_*^{(m)}(x)$ for $x \in [t_1, t_2]$. Since $f_*^{(m)}$ is continuous, such a $t_0$ exists. Define a function $g_0$ by

$$g_0(x) = f_*^{(m)}(t_0) - f_*^{(m)}(x), \quad x \in [t_1, t_2],$$
$$g_0(x) = f_*^{(m)}(t_0) - f_*^{(m)}(t_2), \quad x \in (t_2, 1],$$
$$g_0(x) = f_*^{(m)}(t_0) - f_*^{(m)}(t_1), \quad x \in [0, t_1).$$

Note that $g_0(x) \ge 0$, $x \in [0, 1]$, and $\|g_0\|_\infty \le L|t_2 - t_1|^\gamma = L(\epsilon')^\gamma$. Let $Q$ be the integral operator

$$Qg(t) = \int_0^t g(u)\, du, \quad t \in [0, 1],$$

and define $g = Q^m g_0$ ($m$-fold application of $Q$). Then $g(x) \ge 0$, $x \in [0, 1]$, and

$$\|g\|_\infty \le \|g_0\|_\infty \le L(\epsilon')^\gamma. \qquad (36)$$

Define $f^* = f_* + g$. Since $f^{*(m)}(t) = f_*^{(m)}(t_0)$ on $[t_1, t_2]$, while $f^{*(m)}(t) - f_*^{(m)}(t)$ is constant outside $(t_1, t_2)$, it follows that

$$|f^{*(m)}(x_1) - f^{*(m)}(x_2)| \le L|x_1 - x_2|^\gamma, \quad x_1, x_2 \in [0, 1].$$

Furthermore, $f^* \ge f_*$, and by (36) $\int f^* \le 1 + L(\epsilon')^\gamma$. Defining $f = f^*/\int f^*$, we see that $f$ is a density in $\Sigma(\beta, L)$. Moreover, $f(x) \ge B_*(\beta, L)\big/ \big( (1 + \epsilon/2) \int f^* \big)$ for $x \in [t_1, t_2]$. By selecting $\epsilon'$ sufficiently small, we achieve (35).

Acknowledgements

This work was supported by the Deutsche Forschungsgemeinschaft, Sonderforschungsbereich 373 "Quantification and Simulation of Economic Processes", Berlin, Germany. The research of the first author was supported by the International Science Foundation under Grant MCF000. The authors gratefully acknowledge the help of Alexander Tsybakov with the concise proof of Lemma A.1.

References

Besov, O. V., Il'in, V. P. and Nikol'skii, S. M. (1979). Integral Representations of Functions and Imbedding Theorems, Vol. II. V. H. Winston & Sons, Washington, DC / John Wiley, New York.

Brown, L. D. and Zhang, C.-H. (1998). Asymptotic nonequivalence of nonparametric experiments when the smoothness index is 1/2. Ann. Statist.

Devroye, L. and Györfi, L. (1985). Nonparametric Density Estimation: The L1 View. John Wiley, New York.

Donoho, D. (1994). Asymptotic minimax risk (for sup norm loss): solution via optimal recovery. Probab. Theory Relat. Fields.

Freidlin, M. and Wentzell, A. (1984). Random Perturbations of Dynamical Systems. Springer, New York.

Ibragimov, I. A. and Khasminskii, R. Z. (1982). Bounds for the risks of non-parametric regression estimates. Theory Probab. Appl.

Khasminskii, R. Z. (1978). A lower bound on the risk of non-parametric estimates of densities in the uniform metric. Theory Probab. Appl.

Korostelev, A. P. (1993). Exact asymptotically minimax estimator for nonparametric regression in uniform norm. Theory Probab. Appl.

Korostelev, A. P. and Nussbaum, M. (1995). Density estimation in the uniform norm and white noise approximation. Preprint No. 154, Weierstrass Institute, Berlin.

Leonov, S. L. (1997). On the solution of an optimal recovery problem and its applications in nonparametric regression. Math. Meth. Statist.

Micchelli, C. and Rivlin, T. (1977). A survey of optimal recovery. In: C. Micchelli and T. Rivlin (Eds.), Optimal Estimation in Approximation Theory. Plenum Press, New York.

Nussbaum, M. (1996). Asymptotic equivalence of density estimation and Gaussian white noise. Ann. Statist.

Pinsker, M. S. (1980). Optimal filtering of square integrable signals in Gaussian white noise. Problems Inform. Transmission.

Shorack, G. and Wellner, J. (1986). Empirical Processes with Applications to Statistics. Wiley, New York.

Stone, C. (1982). Optimal global rates of convergence for nonparametric regression. Ann. Statist.

Strasser, H. (1985). Mathematical Theory of Statistics. Walter de Gruyter, Berlin.

Wentzell, A. (1990). Theorems on Large Deviations for Markov Stochastic Processes. Kluwer, Dordrecht.


1 Relative degree and local normal forms

1 Relative degree and local normal forms THE ZERO DYNAMICS OF A NONLINEAR SYSTEM 1 Relative degree and local normal orms The purpose o this Section is to show how single-input single-output nonlinear systems can be locally given, by means o a

More information

Asymptotic Equivalence of Density Estimation and Gaussian White Noise

Asymptotic Equivalence of Density Estimation and Gaussian White Noise Asymptotic Equivalence of Density Estimation and Gaussian White Noise Michael Nussbaum Weierstrass Institute, Berlin September 1995 Abstract Signal recovery in Gaussian white noise with variance tending

More information

Problem Set. Problems on Unordered Summation. Math 5323, Fall Februray 15, 2001 ANSWERS

Problem Set. Problems on Unordered Summation. Math 5323, Fall Februray 15, 2001 ANSWERS Problem Set Problems on Unordered Summation Math 5323, Fall 2001 Februray 15, 2001 ANSWERS i 1 Unordered Sums o Real Terms In calculus and real analysis, one deines the convergence o an ininite series

More information

Nonparametric estimation using wavelet methods. Dominique Picard. Laboratoire Probabilités et Modèles Aléatoires Université Paris VII

Nonparametric estimation using wavelet methods. Dominique Picard. Laboratoire Probabilités et Modèles Aléatoires Université Paris VII Nonparametric estimation using wavelet methods Dominique Picard Laboratoire Probabilités et Modèles Aléatoires Université Paris VII http ://www.proba.jussieu.fr/mathdoc/preprints/index.html 1 Nonparametric

More information

Telescoping Decomposition Method for Solving First Order Nonlinear Differential Equations

Telescoping Decomposition Method for Solving First Order Nonlinear Differential Equations Telescoping Decomposition Method or Solving First Order Nonlinear Dierential Equations 1 Mohammed Al-Reai 2 Maysem Abu-Dalu 3 Ahmed Al-Rawashdeh Abstract The Telescoping Decomposition Method TDM is a new

More information

STAT 801: Mathematical Statistics. Hypothesis Testing

STAT 801: Mathematical Statistics. Hypothesis Testing STAT 801: Mathematical Statistics Hypothesis Testing Hypothesis testing: a statistical problem where you must choose, on the basis o data X, between two alternatives. We ormalize this as the problem o

More information

Numerical Solution of Ordinary Differential Equations in Fluctuationlessness Theorem Perspective

Numerical Solution of Ordinary Differential Equations in Fluctuationlessness Theorem Perspective Numerical Solution o Ordinary Dierential Equations in Fluctuationlessness Theorem Perspective NEJLA ALTAY Bahçeşehir University Faculty o Arts and Sciences Beşiktaş, İstanbul TÜRKİYE TURKEY METİN DEMİRALP

More information

Estimation of a quadratic regression functional using the sinc kernel

Estimation of a quadratic regression functional using the sinc kernel Estimation of a quadratic regression functional using the sinc kernel Nicolai Bissantz Hajo Holzmann Institute for Mathematical Stochastics, Georg-August-University Göttingen, Maschmühlenweg 8 10, D-37073

More information

Strong Lyapunov Functions for Systems Satisfying the Conditions of La Salle

Strong Lyapunov Functions for Systems Satisfying the Conditions of La Salle 06 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 49, NO. 6, JUNE 004 Strong Lyapunov Functions or Systems Satisying the Conditions o La Salle Frédéric Mazenc and Dragan Ne sić Abstract We present a construction

More information

Central Limit Theorems and Proofs

Central Limit Theorems and Proofs Math/Stat 394, Winter 0 F.W. Scholz Central Limit Theorems and Proos The ollowin ives a sel-contained treatment o the central limit theorem (CLT). It is based on Lindeber s (9) method. To state the CLT

More information

EXISTENCE OF SOLUTIONS TO SYSTEMS OF EQUATIONS MODELLING COMPRESSIBLE FLUID FLOW

EXISTENCE OF SOLUTIONS TO SYSTEMS OF EQUATIONS MODELLING COMPRESSIBLE FLUID FLOW Electronic Journal o Dierential Equations, Vol. 15 (15, No. 16, pp. 1 8. ISSN: 17-6691. URL: http://ejde.math.txstate.e or http://ejde.math.unt.e tp ejde.math.txstate.e EXISTENCE OF SOLUTIONS TO SYSTEMS

More information

Variance Function Estimation in Multivariate Nonparametric Regression

Variance Function Estimation in Multivariate Nonparametric Regression Variance Function Estimation in Multivariate Nonparametric Regression T. Tony Cai 1, Michael Levine Lie Wang 1 Abstract Variance function estimation in multivariate nonparametric regression is considered

More information

Bayesian Nonparametric Point Estimation Under a Conjugate Prior

Bayesian Nonparametric Point Estimation Under a Conjugate Prior University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 5-15-2002 Bayesian Nonparametric Point Estimation Under a Conjugate Prior Xuefeng Li University of Pennsylvania Linda

More information

RATE EXACT BAYESIAN ADAPTATION WITH MODIFIED BLOCK PRIORS. By Chao Gao and Harrison H. Zhou Yale University

RATE EXACT BAYESIAN ADAPTATION WITH MODIFIED BLOCK PRIORS. By Chao Gao and Harrison H. Zhou Yale University Submitted to the Annals o Statistics RATE EXACT BAYESIAN ADAPTATION WITH MODIFIED BLOCK PRIORS By Chao Gao and Harrison H. Zhou Yale University A novel block prior is proposed or adaptive Bayesian estimation.

More information

Root Arrangements of Hyperbolic Polynomial-like Functions

Root Arrangements of Hyperbolic Polynomial-like Functions Root Arrangements o Hyperbolic Polynomial-like Functions Vladimir Petrov KOSTOV Université de Nice Laboratoire de Mathématiques Parc Valrose 06108 Nice Cedex France kostov@mathunicer Received: March, 005

More information

Rigorous pointwise approximations for invariant densities of non-uniformly expanding maps

Rigorous pointwise approximations for invariant densities of non-uniformly expanding maps Ergod. Th. & Dynam. Sys. 5, 35, 8 44 c Cambridge University Press, 4 doi:.7/etds.3.9 Rigorous pointwise approximations or invariant densities o non-uniormly expanding maps WAEL BAHSOUN, CHRISTOPHER BOSE

More information

Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals

Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals Acta Applicandae Mathematicae 78: 145 154, 2003. 2003 Kluwer Academic Publishers. Printed in the Netherlands. 145 Asymptotically Efficient Nonparametric Estimation of Nonlinear Spectral Functionals M.

More information

ELEG 3143 Probability & Stochastic Process Ch. 4 Multiple Random Variables

ELEG 3143 Probability & Stochastic Process Ch. 4 Multiple Random Variables Department o Electrical Engineering University o Arkansas ELEG 3143 Probability & Stochastic Process Ch. 4 Multiple Random Variables Dr. Jingxian Wu wuj@uark.edu OUTLINE 2 Two discrete random variables

More information

LIKELIHOOD RATIO INEQUALITIES WITH APPLICATIONS TO VARIOUS MIXTURES

LIKELIHOOD RATIO INEQUALITIES WITH APPLICATIONS TO VARIOUS MIXTURES Ann. I. H. Poincaré PR 38, 6 00) 897 906 00 Éditions scientiiques et médicales Elsevier SAS. All rights reserved S046-0030)05-/FLA LIKELIHOOD RATIO INEQUALITIES WITH APPLICATIONS TO VARIOUS MIXTURES Elisabeth

More information

Optimal Estimation of a Nonsmooth Functional

Optimal Estimation of a Nonsmooth Functional Optimal Estimation of a Nonsmooth Functional T. Tony Cai Department of Statistics The Wharton School University of Pennsylvania http://stat.wharton.upenn.edu/ tcai Joint work with Mark Low 1 Question Suppose

More information

Bayesian Regularization

Bayesian Regularization Bayesian Regularization Aad van der Vaart Vrije Universiteit Amsterdam International Congress of Mathematicians Hyderabad, August 2010 Contents Introduction Abstract result Gaussian process priors Co-authors

More information

Bayesian estimation of the discrepancy with misspecified parametric models

Bayesian estimation of the discrepancy with misspecified parametric models Bayesian estimation of the discrepancy with misspecified parametric models Pierpaolo De Blasi University of Torino & Collegio Carlo Alberto Bayesian Nonparametrics workshop ICERM, 17-21 September 2012

More information

DIFFERENTIAL POLYNOMIALS GENERATED BY SECOND ORDER LINEAR DIFFERENTIAL EQUATIONS

DIFFERENTIAL POLYNOMIALS GENERATED BY SECOND ORDER LINEAR DIFFERENTIAL EQUATIONS Journal o Applied Analysis Vol. 14, No. 2 2008, pp. 259 271 DIFFERENTIAL POLYNOMIALS GENERATED BY SECOND ORDER LINEAR DIFFERENTIAL EQUATIONS B. BELAÏDI and A. EL FARISSI Received December 5, 2007 and,

More information

Optimal global rates of convergence for interpolation problems with random design

Optimal global rates of convergence for interpolation problems with random design Optimal global rates of convergence for interpolation problems with random design Michael Kohler 1 and Adam Krzyżak 2, 1 Fachbereich Mathematik, Technische Universität Darmstadt, Schlossgartenstr. 7, 64289

More information

Density estimators for the convolution of discrete and continuous random variables

Density estimators for the convolution of discrete and continuous random variables Density estimators for the convolution of discrete and continuous random variables Ursula U Müller Texas A&M University Anton Schick Binghamton University Wolfgang Wefelmeyer Universität zu Köln Abstract

More information

Comptes rendus de l Academie bulgare des Sciences, Tome 59, 4, 2006, p POSITIVE DEFINITE RANDOM MATRICES. Evelina Veleva

Comptes rendus de l Academie bulgare des Sciences, Tome 59, 4, 2006, p POSITIVE DEFINITE RANDOM MATRICES. Evelina Veleva Comtes rendus de l Academie bulgare des ciences Tome 59 4 6 353 36 POITIVE DEFINITE RANDOM MATRICE Evelina Veleva Abstract: The aer begins with necessary and suicient conditions or ositive deiniteness

More information

Nonparametric regression with martingale increment errors

Nonparametric regression with martingale increment errors S. Gaïffas (LSTA - Paris 6) joint work with S. Delattre (LPMA - Paris 7) work in progress Motivations Some facts: Theoretical study of statistical algorithms requires stationary and ergodicity. Concentration

More information

Discrete Mathematics. On the number of graphs with a given endomorphism monoid

Discrete Mathematics. On the number of graphs with a given endomorphism monoid Discrete Mathematics 30 00 376 384 Contents lists available at ScienceDirect Discrete Mathematics journal homepage: www.elsevier.com/locate/disc On the number o graphs with a given endomorphism monoid

More information

Fast learning rates for plug-in classifiers under the margin condition

Fast learning rates for plug-in classifiers under the margin condition Fast learning rates for plug-in classifiers under the margin condition Jean-Yves Audibert 1 Alexandre B. Tsybakov 2 1 Certis ParisTech - Ecole des Ponts, France 2 LPMA Université Pierre et Marie Curie,

More information

Goodness-of-fit tests for the cure rate in a mixture cure model

Goodness-of-fit tests for the cure rate in a mixture cure model Biometrika (217), 13, 1, pp. 1 7 Printed in Great Britain Advance Access publication on 31 July 216 Goodness-of-fit tests for the cure rate in a mixture cure model BY U.U. MÜLLER Department of Statistics,

More information

A talk on Oracle inequalities and regularization. by Sara van de Geer

A talk on Oracle inequalities and regularization. by Sara van de Geer A talk on Oracle inequalities and regularization by Sara van de Geer Workshop Regularization in Statistics Banff International Regularization Station September 6-11, 2003 Aim: to compare l 1 and other

More information

Scattered Data Approximation of Noisy Data via Iterated Moving Least Squares

Scattered Data Approximation of Noisy Data via Iterated Moving Least Squares Scattered Data Approximation o Noisy Data via Iterated Moving Least Squares Gregory E. Fasshauer and Jack G. Zhang Abstract. In this paper we ocus on two methods or multivariate approximation problems

More information

Statistical inference on Lévy processes

Statistical inference on Lévy processes Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline

More information

SPOC: An Innovative Beamforming Method

SPOC: An Innovative Beamforming Method SPOC: An Innovative Beamorming Method Benjamin Shapo General Dynamics Ann Arbor, MI ben.shapo@gd-ais.com Roy Bethel The MITRE Corporation McLean, VA rbethel@mitre.org ABSTRACT The purpose o a radar or

More information

In the index (pages ), reduce all page numbers by 2.

In the index (pages ), reduce all page numbers by 2. Errata or Nilpotence and periodicity in stable homotopy theory (Annals O Mathematics Study No. 28, Princeton University Press, 992) by Douglas C. Ravenel, July 2, 997, edition. Most o these were ound by

More information

BANDELET IMAGE APPROXIMATION AND COMPRESSION

BANDELET IMAGE APPROXIMATION AND COMPRESSION BANDELET IMAGE APPOXIMATION AND COMPESSION E. LE PENNEC AND S. MALLAT Abstract. Finding eicient geometric representations o images is a central issue to improve image compression and noise removal algorithms.

More information

Theorem Let J and f be as in the previous theorem. Then for any w 0 Int(J), f(z) (z w 0 ) n+1

Theorem Let J and f be as in the previous theorem. Then for any w 0 Int(J), f(z) (z w 0 ) n+1 (w) Second, since lim z w z w z w δ. Thus, i r δ, then z w =r (w) z w = (w), there exist δ, M > 0 such that (w) z w M i dz ML({ z w = r}) = M2πr, which tends to 0 as r 0. This shows that g = 2πi(w), which

More information

10. Joint Moments and Joint Characteristic Functions

10. Joint Moments and Joint Characteristic Functions 10. Joint Moments and Joint Characteristic Functions Following section 6, in this section we shall introduce various parameters to compactly represent the inormation contained in the joint p.d. o two r.vs.

More information

Estimation of the functional Weibull-tail coefficient

Estimation of the functional Weibull-tail coefficient 1/ 29 Estimation of the functional Weibull-tail coefficient Stéphane Girard Inria Grenoble Rhône-Alpes & LJK, France http://mistis.inrialpes.fr/people/girard/ June 2016 joint work with Laurent Gardes,

More information

VALUATIVE CRITERIA BRIAN OSSERMAN

VALUATIVE CRITERIA BRIAN OSSERMAN VALUATIVE CRITERIA BRIAN OSSERMAN Intuitively, one can think o separatedness as (a relative version o) uniqueness o limits, and properness as (a relative version o) existence o (unique) limits. It is not

More information

Journal of Mathematical Analysis and Applications

Journal of Mathematical Analysis and Applications J. Math. Anal. Appl. 352 2009) 739 748 Contents lists available at ScienceDirect Journal o Mathematical Analysis Applications www.elsevier.com/locate/jmaa The growth, oscillation ixed points o solutions

More information

Minimax Estimation of Kernel Mean Embeddings

Minimax Estimation of Kernel Mean Embeddings Minimax Estimation of Kernel Mean Embeddings Bharath K. Sriperumbudur Department of Statistics Pennsylvania State University Gatsby Computational Neuroscience Unit May 4, 2016 Collaborators Dr. Ilya Tolstikhin

More information

Supplementary material for Continuous-action planning for discounted infinite-horizon nonlinear optimal control with Lipschitz values

Supplementary material for Continuous-action planning for discounted infinite-horizon nonlinear optimal control with Lipschitz values Supplementary material or Continuous-action planning or discounted ininite-horizon nonlinear optimal control with Lipschitz values List o main notations x, X, u, U state, state space, action, action space,

More information

D I S C U S S I O N P A P E R

D I S C U S S I O N P A P E R I N S T I T U T D E S T A T I S T I Q U E B I O S T A T I S T I Q U E E T S C I E N C E S A C T U A R I E L L E S ( I S B A ) UNIVERSITÉ CATHOLIQUE DE LOUVAIN D I S C U S S I O N P A P E R 2014/06 Adaptive

More information

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS

(2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS (2m)-TH MEAN BEHAVIOR OF SOLUTIONS OF STOCHASTIC DIFFERENTIAL EQUATIONS UNDER PARAMETRIC PERTURBATIONS Svetlana Janković and Miljana Jovanović Faculty of Science, Department of Mathematics, University

More information

Research Statement. Harrison H. Zhou Cornell University

Research Statement. Harrison H. Zhou Cornell University Research Statement Harrison H. Zhou Cornell University My research interests and contributions are in the areas of model selection, asymptotic decision theory, nonparametric function estimation and machine

More information

SEPARATED AND PROPER MORPHISMS

SEPARATED AND PROPER MORPHISMS SEPARATED AND PROPER MORPHISMS BRIAN OSSERMAN The notions o separatedness and properness are the algebraic geometry analogues o the Hausdor condition and compactness in topology. For varieties over the

More information

Statistics: Learning models from data

Statistics: Learning models from data DS-GA 1002 Lecture notes 5 October 19, 2015 Statistics: Learning models from data Learning models from data that are assumed to be generated probabilistically from a certain unknown distribution is a crucial

More information

An idea how to solve some of the problems. diverges the same must hold for the original series. T 1 p T 1 p + 1 p 1 = 1. dt = lim

An idea how to solve some of the problems. diverges the same must hold for the original series. T 1 p T 1 p + 1 p 1 = 1. dt = lim An idea how to solve some of the problems 5.2-2. (a) Does not converge: By multiplying across we get Hence 2k 2k 2 /2 k 2k2 k 2 /2 k 2 /2 2k 2k 2 /2 k. As the series diverges the same must hold for the

More information

10-704: Information Processing and Learning Fall Lecture 24: Dec 7

10-704: Information Processing and Learning Fall Lecture 24: Dec 7 0-704: Information Processing and Learning Fall 206 Lecturer: Aarti Singh Lecture 24: Dec 7 Note: These notes are based on scribed notes from Spring5 offering of this course. LaTeX template courtesy of

More information

Inverse problems in statistics

Inverse problems in statistics Inverse problems in statistics Laurent Cavalier (Université Aix-Marseille 1, France) YES, Eurandom, 10 October 2011 p. 1/32 Part II 2) Adaptation and oracle inequalities YES, Eurandom, 10 October 2011

More information

A Simple Explanation of the Sobolev Gradient Method

A Simple Explanation of the Sobolev Gradient Method A Simple Explanation o the Sobolev Gradient Method R. J. Renka July 3, 2006 Abstract We have observed that the term Sobolev gradient is used more oten than it is understood. Also, the term is oten used

More information

2. ETA EVALUATIONS USING WEBER FUNCTIONS. Introduction

2. ETA EVALUATIONS USING WEBER FUNCTIONS. Introduction . ETA EVALUATIONS USING WEBER FUNCTIONS Introduction So ar we have seen some o the methods or providing eta evaluations that appear in the literature and we have seen some o the interesting properties

More information

Can we do statistical inference in a non-asymptotic way? 1

Can we do statistical inference in a non-asymptotic way? 1 Can we do statistical inference in a non-asymptotic way? 1 Guang Cheng 2 Statistics@Purdue www.science.purdue.edu/bigdata/ ONR Review Meeting@Duke Oct 11, 2017 1 Acknowledge NSF, ONR and Simons Foundation.

More information

VALUATIVE CRITERIA FOR SEPARATED AND PROPER MORPHISMS

VALUATIVE CRITERIA FOR SEPARATED AND PROPER MORPHISMS VALUATIVE CRITERIA FOR SEPARATED AND PROPER MORPHISMS BRIAN OSSERMAN Recall that or prevarieties, we had criteria or being a variety or or being complete in terms o existence and uniqueness o limits, where

More information

Non linear estimation in anisotropic multiindex denoising II

Non linear estimation in anisotropic multiindex denoising II Non linear estimation in anisotropic multiindex denoising II Gérard Kerkyacharian, Oleg Lepski, Dominique Picard Abstract In dimension one, it has long been observed that the minimax rates of convergences

More information

Fluctuationlessness Theorem and its Application to Boundary Value Problems of ODEs

Fluctuationlessness Theorem and its Application to Boundary Value Problems of ODEs Fluctuationlessness Theorem and its Application to Boundary Value Problems o ODEs NEJLA ALTAY İstanbul Technical University Inormatics Institute Maslak, 34469, İstanbul TÜRKİYE TURKEY) nejla@be.itu.edu.tr

More information

On Picard value problem of some difference polynomials

On Picard value problem of some difference polynomials Arab J Math 018 7:7 37 https://doiorg/101007/s40065-017-0189-x Arabian Journal o Mathematics Zinelâabidine Latreuch Benharrat Belaïdi On Picard value problem o some dierence polynomials Received: 4 April

More information

Analysis of the regularity, pointwise completeness and pointwise generacy of descriptor linear electrical circuits

Analysis of the regularity, pointwise completeness and pointwise generacy of descriptor linear electrical circuits Computer Applications in Electrical Engineering Vol. 4 Analysis o the regularity pointwise completeness pointwise generacy o descriptor linear electrical circuits Tadeusz Kaczorek Białystok University

More information

SEPARATED AND PROPER MORPHISMS

SEPARATED AND PROPER MORPHISMS SEPARATED AND PROPER MORPHISMS BRIAN OSSERMAN Last quarter, we introduced the closed diagonal condition or a prevariety to be a prevariety, and the universally closed condition or a variety to be complete.

More information

Treatment and analysis of data Applied statistics Lecture 6: Bayesian estimation

Treatment and analysis of data Applied statistics Lecture 6: Bayesian estimation Treatment and analysis o data Applied statistics Lecture 6: Bayesian estimation Topics covered: Bayes' Theorem again Relation to Likelihood Transormation o pd A trivial example Wiener ilter Malmquist bias

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang

More information

arxiv: v1 [math.gt] 20 Feb 2008

arxiv: v1 [math.gt] 20 Feb 2008 EQUIVALENCE OF REAL MILNOR S FIBRATIONS FOR QUASI HOMOGENEOUS SINGULARITIES arxiv:0802.2746v1 [math.gt] 20 Feb 2008 ARAÚJO DOS SANTOS, R. Abstract. We are going to use the Euler s vector ields in order

More information

SMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES

SMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES Statistica Sinica 19 (2009), 71-81 SMOOTHED BLOCK EMPIRICAL LIKELIHOOD FOR QUANTILES OF WEAKLY DEPENDENT PROCESSES Song Xi Chen 1,2 and Chiu Min Wong 3 1 Iowa State University, 2 Peking University and

More information

arxiv:math.pr/ v1 17 May 2004

arxiv:math.pr/ v1 17 May 2004 Probabilistic Analysis for Randomized Game Tree Evaluation Tämur Ali Khan and Ralph Neininger arxiv:math.pr/0405322 v1 17 May 2004 ABSTRACT: We give a probabilistic analysis for the randomized game tree

More information

Basic mathematics of economic models. 3. Maximization

Basic mathematics of economic models. 3. Maximization John Riley 1 January 16 Basic mathematics o economic models 3 Maimization 31 Single variable maimization 1 3 Multi variable maimization 6 33 Concave unctions 9 34 Maimization with non-negativity constraints

More information

On Convexity of Reachable Sets for Nonlinear Control Systems

On Convexity of Reachable Sets for Nonlinear Control Systems Proceedings o the European Control Conerence 27 Kos, Greece, July 2-5, 27 WeC5.2 On Convexity o Reachable Sets or Nonlinear Control Systems Vadim Azhmyakov, Dietrich Flockerzi and Jörg Raisch Abstract

More information

The Codimension of the Zeros of a Stable Process in Random Scenery

The Codimension of the Zeros of a Stable Process in Random Scenery The Codimension of the Zeros of a Stable Process in Random Scenery Davar Khoshnevisan The University of Utah, Department of Mathematics Salt Lake City, UT 84105 0090, U.S.A. davar@math.utah.edu http://www.math.utah.edu/~davar

More information

SOME CHARACTERIZATIONS OF HARMONIC CONVEX FUNCTIONS

SOME CHARACTERIZATIONS OF HARMONIC CONVEX FUNCTIONS International Journal o Analysis and Applications ISSN 2291-8639 Volume 15, Number 2 2017, 179-187 DOI: 10.28924/2291-8639-15-2017-179 SOME CHARACTERIZATIONS OF HARMONIC CONVEX FUNCTIONS MUHAMMAD ASLAM

More information

Ruelle Operator for Continuous Potentials and DLR-Gibbs Measures

Ruelle Operator for Continuous Potentials and DLR-Gibbs Measures Ruelle Operator or Continuous Potentials and DLR-Gibbs Measures arxiv:1608.03881v4 [math.ds] 22 Apr 2018 Leandro Cioletti Departamento de Matemática - UnB 70910-900, Brasília, Brazil cioletti@mat.unb.br

More information

Wavelet Shrinkage for Nonequispaced Samples

Wavelet Shrinkage for Nonequispaced Samples University of Pennsylvania ScholarlyCommons Statistics Papers Wharton Faculty Research 1998 Wavelet Shrinkage for Nonequispaced Samples T. Tony Cai University of Pennsylvania Lawrence D. Brown University

More information

Review of Prerequisite Skills for Unit # 2 (Derivatives) U2L2: Sec.2.1 The Derivative Function

Review of Prerequisite Skills for Unit # 2 (Derivatives) U2L2: Sec.2.1 The Derivative Function UL1: Review o Prerequisite Skills or Unit # (Derivatives) Working with the properties o exponents Simpliying radical expressions Finding the slopes o parallel and perpendicular lines Simpliying rational

More information

( x) f = where P and Q are polynomials.

( x) f = where P and Q are polynomials. 9.8 Graphing Rational Functions Lets begin with a deinition. Deinition: Rational Function A rational unction is a unction o the orm ( ) ( ) ( ) P where P and Q are polynomials. Q An eample o a simple rational

More information

Partial Averaging of Fuzzy Differential Equations with Maxima

Partial Averaging of Fuzzy Differential Equations with Maxima Advances in Dynamical Systems and Applications ISSN 973-5321, Volume 6, Number 2, pp. 199 27 211 http://campus.mst.edu/adsa Partial Averaging o Fuzzy Dierential Equations with Maxima Olga Kichmarenko and

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

EXPANSIVE ALGEBRAIC ACTIONS OF DISCRETE RESIDUALLY FINITE AMENABLE GROUPS AND THEIR ENTROPY

EXPANSIVE ALGEBRAIC ACTIONS OF DISCRETE RESIDUALLY FINITE AMENABLE GROUPS AND THEIR ENTROPY EXPANSIVE ALGEBRAIC ACTIONS OF DISCRETE RESIDUALLY FINITE AMENABLE GROUPS AND THEIR ENTROPY CHRISTOPHER DENINGER AND KLAUS SCHMIDT Abstract. We prove an entropy ormula or certain expansive actions o a

More information

Definition: Let f(x) be a function of one variable with continuous derivatives of all orders at a the point x 0, then the series.

Definition: Let f(x) be a function of one variable with continuous derivatives of all orders at a the point x 0, then the series. 2.4 Local properties o unctions o several variables In this section we will learn how to address three kinds o problems which are o great importance in the ield o applied mathematics: how to obtain the

More information

3.0.1 Multivariate version and tensor product of experiments

3.0.1 Multivariate version and tensor product of experiments ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016 Lecture 3: Minimax risk of GLM and four extensions Lecturer: Yihong Wu Scribe: Ashok Vardhan, Jan 28, 2016 [Ed. Mar 24]

More information

2.6 Two-dimensional continuous interpolation 3: Kriging - introduction to geostatistics. References - geostatistics. References geostatistics (cntd.
