Nonparametric estimation using wavelet methods. Dominique Picard. Laboratoire Probabilités et Modèles Aléatoires Université Paris VII


1 Nonparametric estimation using wavelet methods. Dominique Picard, Laboratoire Probabilités et Modèles Aléatoires, Université Paris VII. http://

2 Nonparametric estimation

3 Examples of nonparametrics. Estimating a probability density: we observe $X_1,\dots,X_n$ i.i.d. with law $P$ having a density $f$ w.r.t. Lebesgue measure; our aim is to estimate $f$. Regression framework: we observe $(X_1,Y_1),\dots,(X_n,Y_n)$ i.i.d. with $X_i$ uniform on $[0,1]$ and $Y_i = f(X_i) + \epsilon_i$, $\epsilon_i \sim N(0,1)$; our aim is to estimate $f$.

4 Examples of nonparametrics. White noise model: $dY^\epsilon_t = f(t)\,dt + \epsilon\,dW_t$, $t \in [0,1]$, $\epsilon = 1/\sqrt{n}$. We observe, for every $\phi \in L^2([0,1])$, $Y_\phi = \int_0^1 \phi(t) f(t)\,dt + \epsilon\, \xi_\phi$, where $(\xi_\phi,\xi_\eta) \sim N\!\left(0, \begin{pmatrix} \|\phi\|^2 & \langle \phi,\eta\rangle \\ \langle \phi,\eta\rangle & \|\eta\|^2 \end{pmatrix}\right)$. Our aim is to estimate $f$.

5 Examples of nonparametrics (more involved). SDE: $dX_t = b(t)\,dt + f(t)\,dW_t$. We observe $X_{i/n}$, $i = 1,\dots,n$. Our aim is to estimate $f$.

6 Examples of nonparametrics (more involved). Inverse models: $dY^\epsilon_t = Af(t)\,dt + \epsilon\,dW_t$, $t\in[0,1]$, $\epsilon = 1/\sqrt{n}$. We observe, for every $\phi\in L^2([0,1])$, $Y_\phi = \int_0^1 \phi(t)\,Af(t)\,dt + \epsilon\,\xi_\phi$, where $A$ is a known linear operator, for instance a convolution $Af(s) = \int g(s-t)f(t)\,dt$ or a Radon transform... Our aim is to estimate $f$.

7 Why is it difficult? Estimating a density: we observe $X_1,\dots,X_n$ i.i.d. with law $P$ having a density $f$ w.r.t. Lebesgue measure; our aim is to estimate $f$. Easy: estimate $F(x) = P(X_i \le x)$ by $\hat F_n(x) = \frac1n \sum_{i=1}^n 1_{]-\infty,x]}(X_i)$; then $\hat F_n(x) - F(x) = \frac{\xi_n(x)}{\sqrt n}$, where $\{\xi_n(x),\ x\in\mathbb R\}$ converges in law to $\{B^0(F(x)),\ x\in\mathbb R\}$ (Kolmogorov–Smirnov). Two obstructions to differentiation: $\hat F_n(x)$ is not differentiable, and $B^0(F(x))$ is not differentiable either.

8 Parzen kernel method: $\hat K_{h_n}(x) = \frac{1}{h_n}\int K\!\left(\frac{x-y}{h_n}\right) d\hat F_n(y) = \frac{1}{n h_n} \sum_{i=1}^n K\!\left(\frac{x - X_i}{h_n}\right)$.
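
A minimal numerical sketch of this estimator, assuming a rectangular kernel on $[-1,1]$; the sample size $n = 300$ and the bandwidth $h = 0.4$ echo the figure on the next slide and are purely illustrative:

```python
import numpy as np

def parzen_kde(x_grid, sample, h, kernel=None):
    """Parzen estimate (1/(n h)) * sum_i K((x - X_i) / h) on a grid of x values."""
    if kernel is None:
        # illustrative choice: rectangular kernel on [-1, 1], integrates to 1
        kernel = lambda u: 0.5 * (np.abs(u) <= 1.0)
    diffs = (np.asarray(x_grid)[:, None] - np.asarray(sample)[None, :]) / h
    return kernel(diffs).mean(axis=1) / h

# toy usage: n = 300 standard normal observations, bandwidth h = 0.4
rng = np.random.default_rng(0)
z = rng.normal(size=300)
grid = np.linspace(-4.0, 4.0, 200)
f_hat = parzen_kde(grid, z, h=0.4)
```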

9 [Figure: R kernel density estimates (density.default) of the same sample, $N = 300$, with bandwidths 3, 1, 0.7, 0.4, 0.3, 0.1, then the default bandwidth, and the default bandwidth with $N = 3000$.]

10 Minimax framework. Our aim is to estimate $f\in V$. We have a loss function $l(\hat f, f)$ (for instance $l(\hat f,f) = \|\hat f - f\|_p^p$ or $\|\hat f - f\|_\infty$). $f^*$ is minimax (exactly) if $\sup_V E_f\, l(f^*, f) = \inf_{\hat f} \sup_V E_f\, l(\hat f, f)$. $f^*_n$ is minimax (up to constants) if $c\, \inf_{\hat f_n} \sup_V E^n_f\, l(\hat f_n, f) \le \sup_V E^n_f\, l(f^*_n, f) \le C\, \inf_{\hat f_n} \sup_V E^n_f\, l(\hat f_n, f)$ for all $n$.

11 Minimax framework. Two steps: find a rate $r(n)$ such that 1. Lower bound: for every estimator $\hat f_n$, $\sup_V E^n_f\, l(\hat f_n, f) \ge c\, r(n)$; 2. Upper bound: construct an estimation method $f^*_n$ with $\sup_V E^n_f\, l(f^*_n, f) \le C\, r(n)$.

12 Lower bound. Models: density (1), regression (2), white noise model (3). $V = V_\alpha(L) = \{f : [0,1]\to\mathbb R,\ \sup_{|x-y|\le\delta} |f(x)-f(y)| \le L\delta^\alpha\ \forall\delta,\ |f(0)| \le L\}$, $l(\hat f,f) = \|\hat f - f\|_p^p$. Theorem 1. For $0 < \alpha \le 1$, $p\in[1,\infty)$, in models 1, 2 or 3, $\inf_{\hat f_n} \sup_{V_\alpha(L)} E^n_f \|\hat f_n - f\|_p^p \ge c\, n^{-\frac{p\alpha}{1+2\alpha}} := c\, r(n)$. Proof: p. 155 and more in HKPT.

13 Lower bound. The proof consists in finding a collection $\Gamma$ of functions (as big as possible) with the following requirements: 1. $\Gamma \subset V_\alpha(L)$; 2. $\|f-g\|_p^p \ge \delta$ for all $f \ne g$ in $\Gamma$; 3. $d(P^n_f, P^n_g) \le \frac12$ for all $f \ne g$ in $\Gamma$.

14 Upper bound.
$\hat K_{h_n}(x) = \frac{1}{nh_n}\sum_{i=1}^n K\!\left(\frac{x-X_i}{h_n}\right)$ (density, Rosenblatt 1956);
$\hat K_{h_n}(x) = \frac{1}{nh_n}\sum_{i=1}^n K\!\left(\frac{x-X_i}{h_n}\right) Y_i$ (regression, Nadaraya–Watson 1964);
$\hat K_{h_n}(x) = \frac{1}{h_n}\int_0^1 K\!\left(\frac{x-t}{h_n}\right) dY^\epsilon_t$ (white noise).
Theorem 2. For $0<\alpha\le1$, $p\in[1,\infty)$, in models 1, 2 or 3, with $\int K(x)\,dx = 1$, $K$ compactly supported in $[-M,M]$, $\|K\|_\infty < \infty$, and $f^*_n = \hat K_{h_n}$ with $h_n = n^{-\frac{1}{1+2\alpha}}$: $\sup_{V_\alpha(L)} E^n_f \|f^*_n - f\|_p^p \le C\, n^{-\frac{p\alpha}{1+2\alpha}} = C\, r(n)$.
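
A sketch of the regression version above, with the bandwidth $h_n = n^{-1/(1+2\alpha)}$ of Theorem 2; the rectangular kernel, the value $\alpha = 1$ and the test function $f$ are illustrative choices, and the formula is the uniform-design estimator displayed on the slide (no Nadaraya–Watson denominator):

```python
import numpy as np

def kernel_regression(x_grid, X, Y, alpha=1.0, M=1.0):
    """Uniform-design kernel regression (1/(n h_n)) * sum_i K((x - X_i)/h_n) * Y_i,
    with the Theorem-2 bandwidth h_n = n^(-1/(1+2*alpha))."""
    n = len(X)
    h_n = n ** (-1.0 / (1.0 + 2.0 * alpha))
    K = lambda u: 0.5 / M * (np.abs(u) <= M)        # rectangular kernel on [-M, M], integral 1
    U = (np.asarray(x_grid)[:, None] - np.asarray(X)[None, :]) / h_n
    return (K(U) * np.asarray(Y)[None, :]).mean(axis=1) / h_n

# toy usage: Y_i = f(X_i) + eps_i, with X_i uniform on [0, 1]
rng = np.random.default_rng(0)
n = 500
X = rng.uniform(size=n)
f = lambda t: np.sin(2.0 * np.pi * t)               # illustrative regression function
Y = f(X) + rng.normal(size=n)
grid = np.linspace(0.0, 1.0, 200)
f_hat = kernel_regression(grid, X, Y, alpha=1.0)
```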

15 [Figure: the kernel density estimates of slide 9 again, with bandwidths decreasing from 3 down to the default choice.]

16 Upper bound, $p = 2$, model (1): $E^n_f(\hat K_{h_n}(x)-f(x))^2 = E^n_f(\hat K_{h_n}(x) - K_{h_n}*f(x))^2 + (K_{h_n}*f(x) - f(x))^2$. Balance bias and variance. Variance: $E^n_f(\hat K_{h_n}(x) - K_{h_n}*f(x))^2 \le \frac1n \int K_{h_n}(x-y)^2 f(y)\,dy \le \frac{L}{n h_n}\int K(x)^2\,dx \le \frac{2ML\|K\|_\infty^2}{n h_n}$.

17 Balance bias, variance. Bias: $|K_{h_n}*f(x) - f(x)| = \left|\int K_{h_n}(u)\,[f(x-u)-f(x)]\,du\right| \le 2M\|K\|_\infty \sup_{|u|/h_n\le M}|f(x-u)-f(x)| \le 2M\|K\|_\infty\, L\,(2Mh_n)^\alpha$.

18 Balance bias, variance. $E^n_f(\hat K_{h_n}(x)-f(x))^2 = E^n_f(\hat K_{h_n}(x)-K_{h_n}*f(x))^2 + (K_{h_n}*f(x)-f(x))^2 \le \frac{2ML\|K\|_\infty^2}{nh_n} + 4M^2\|K\|_\infty^2 L^2 (2Mh_n)^{2\alpha}$. Proof for general $p$ (Rosenthal inequality): see HKPT. Observe that what is needed is in fact: $f\in V \Rightarrow \|K_h*f - f\|_p \le C h^\alpha$.
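
Filling in the balance step: the variance term above is of order $\frac{1}{nh_n}$ and the squared bias of order $h_n^{2\alpha}$; equating $\frac{1}{nh_n} \asymp h_n^{2\alpha}$ gives $h_n \asymp n^{-\frac{1}{1+2\alpha}}$, and plugging this back in gives a pointwise risk of order $n^{-\frac{2\alpha}{1+2\alpha}}$, which is exactly the bandwidth choice and the rate of Theorem 2 for $p=2$ (constants aside).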

19 Orthogonal series methods. $f\in L^2([0,1])$, $E = \{\psi_i,\ i\in\mathbb N\}$ an orthonormal basis of $L^2([0,1],dt)$, $f = \sum_i \theta_i\psi_i$, $x_i = \int \psi_i\, dY$, $i\in\mathbb N$. General estimator: $\hat f = \sum_{i\in A}\hat\theta_i\psi_i$. Two choices: $A$ and $\hat\theta_i$.

20 Orthogonal series methods. $A$ is (generally) $\{0,\dots,K\}$, and
$\hat\theta_i = \frac1n\sum_{l=1}^n \psi_i(X_l)$ (density);
$\hat\theta_i = \frac1n\sum_{l=1}^n \psi_i(X_l)\,Y_l$ (regression);
$\hat\theta_i = \int_0^1 \psi_i(t)\,dY^\epsilon_t$ (white noise).
This defines $\hat f_K$.
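
A sketch of the density version of $\hat f_K$; the cosine basis $\psi_0 = 1$, $\psi_i(x) = \sqrt2\cos(i\pi x)$ (orthonormal in $L^2([0,1])$), the Beta(2,2) sample and the cut-off $K = 5$ are illustrative choices:

```python
import numpy as np

def series_density_estimate(x_grid, sample, K):
    """Projection estimator f_hat_K = sum_{i <= K} theta_hat_i * psi_i on [0, 1],
    with empirical coefficients theta_hat_i = (1/n) sum_l psi_i(X_l) (density case).
    Basis used here: psi_0 = 1, psi_i(x) = sqrt(2) cos(i pi x)."""
    sample, x_grid = np.asarray(sample), np.asarray(x_grid)
    f_hat = np.ones_like(x_grid)                 # theta_hat_0 = 1 since psi_0 = 1
    for i in range(1, K + 1):
        psi_i = lambda t: np.sqrt(2.0) * np.cos(i * np.pi * t)
        f_hat = f_hat + psi_i(sample).mean() * psi_i(x_grid)
    return f_hat

# toy usage: Beta(2, 2) sample on [0, 1], K = 5 empirical coefficients
rng = np.random.default_rng(0)
x = rng.beta(2.0, 2.0, size=300)
grid = np.linspace(0.0, 1.0, 200)
f_hat = series_density_estimate(grid, x, K=5)
```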

21 Upper bounds. If we assume $f$ belongs to a polynomially tail-compact domain: for $s>0$ fixed, $V = \{f = \sum_k \theta_k\psi_k,\ \sum_{k>K}\theta_k^2 \le M^2K^{-2s}\ \forall K\}$.

22 Upper bounds. For $f\in V(s,M) = \{f = \sum_k\theta_k\psi_k,\ \sum_{k>K}\theta_k^2\le M^2K^{-2s}\ \forall K\}$:
$E^n_f\|\hat f_K - f\|^2 = \sum_{k\le K} E^n_f(\hat\theta_k-\theta_k)^2 + \sum_{k>K}\theta_k^2 \le (K+1)\frac1n + \sum_{k>K}\theta_k^2 \le (K+1)\frac1n + M^2K^{-2s}$.
Optimized for $K_s = c\,n^{\frac{1}{1+2s}}$ ($\le K_0 = cn$, decreasing in $s$):
$\sup_{f\in V(s,M)} E\|\hat f_{K_s} - f\|^2 \le c\, n^{-\frac{2s}{1+2s}}$.

23 Lower bounds. Moreover, the minimax lower bound (Pinsker) $\inf_{\mathrm{Est}}\sup_V E\|\mathrm{Est} - f\|_2^2 \ge c_0\, n^{-\frac{2s}{1+2s}}$, combined with $\sup_{f\in V(s,M)} E\|\hat f_{K_s}-f\|^2 \le c\, n^{-\frac{2s}{1+2s}}$, says that $\hat f_{K_s}$ is rate optimal over $V$; but $K_s = c\,n^{\frac{1}{1+2s}}$ depends on $s$.

24 Kernels versus series. Series lead to easier calculations (for proofs and computation). Tuning parameters: $K \approx h^{-1}$, which gives an interpretation of the bandwidth parameter in terms of the dimension of the problem. The space $V$ depends on the basis and on the numbering within the basis, and only allows an $L^2$ loss function.

25 Bases and functional spaces

26 Trigonometric basis and Sobolev spaces. In $L^2([0,1])$ of periodic functions: $\psi_0 = 1$, $\psi_{2k}(x) = \sqrt2\cos(2k\pi x)$, $\psi_{2k+1}(x) = \sqrt2\sin(2k\pi x)$. Let $\beta\in\mathbb N$ and define the Sobolev space $W(\beta, L) = \{f:[0,1]\to\mathbb R:\ f^{(\beta-1)}$ absolutely continuous, $\int (f^{(\beta)})^2(x)\,dx \le L^2\}$ and $W_{per}(\beta,L) = \{f\in W(\beta,L)$, periodic$\}$.

27 Trigonometric basis and Sobolev spaces. Let $\Theta((a_j), Q) = \{\theta\in\ell^2 : \sum_j a_j^2\theta_j^2 \le Q^2\}$. We have $W_{per}(\beta,L) = \Theta((a_j),Q) := \Theta(\beta,Q)$, with $a_j = j^\beta$ for $j$ even, $a_j = (j-1)^\beta$ for $j$ odd, and $Q = L/\pi^\beta$.

28 Trigonometric basis and Sobolev spaces. $\Theta(\beta,Q) = \{\theta\in\ell^2 : \sum_j j^{2\beta}\theta_j^2\le Q^2\}$ and $\Theta(\beta,Q)\subset V(\beta,Q)$, since $\sum_{j\ge K}\theta_j^2 \le \sum_{j\ge K}\Big[\frac{j}{K}\Big]^{2\beta}\theta_j^2 \le K^{-2\beta}\sum_j j^{2\beta}\theta_j^2$.

29 Examples of bases: the Haar wavelet basis. $\psi_{-1,0} := 1_{[0,1]} =: \phi$, $\psi(x) := 1_{[0,1/2[} - 1_{[1/2,1]}$, $\psi_{j,k}(x) := 2^{j/2}\psi(2^jx - k)$, $j\in\mathbb N$, $k\in\{0,\dots,2^j-1\}$. Then $\{\psi_{j,k},\ j\in\mathbb N\cup\{-1\},\ k\in\{0,\dots,2^j-1\}\}$ is an orthonormal basis of $L^2([0,1])$.
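
A direct transcription of these definitions in code (the grid-based orthonormality check at the end is only an illustrative numerical verification):

```python
import numpy as np

def haar_phi(x):
    """Scaling function phi = 1_[0,1)."""
    return ((0.0 <= x) & (x < 1.0)).astype(float)

def haar_psi(x):
    """Mother wavelet psi = 1_[0,1/2) - 1_[1/2,1)."""
    return ((0.0 <= x) & (x < 0.5)).astype(float) - ((0.5 <= x) & (x < 1.0)).astype(float)

def psi_jk(x, j, k):
    """psi_{j,k}(x) = 2^{j/2} psi(2^j x - k); by convention psi_{-1,0} = phi."""
    if j == -1:
        return haar_phi(x)
    return 2.0 ** (j / 2.0) * haar_psi(2.0 ** j * x - k)

# illustrative check of orthonormality by Riemann sums on a dyadic grid
x = np.linspace(0.0, 1.0, 2 ** 12, endpoint=False)
dx = x[1] - x[0]
print(np.sum(psi_jk(x, 2, 1) ** 2) * dx)               # ~ 1 (normalization)
print(np.sum(psi_jk(x, 2, 1) * psi_jk(x, 3, 5)) * dx)  # ~ 0 (orthogonality)
```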

30 [Figure: plot of $y$ versus Index.]

31 Haar wavelet basis and Besov spaces. Let us define $\phi_{j,k}(x) := 2^{j/2}\phi(2^jx-k)$, $j\in\mathbb N$, $k\in\{0,\dots,2^j-1\}$, and $V_j := \{f = \sum_{k\in\{0,\dots,2^j-1\}}\alpha_{jk}\phi_{j,k},\ \alpha_{jk}\in\mathbb R\}$, $W_j := \{f = \sum_{k\in\{0,\dots,2^j-1\}}\beta_{jk}\psi_{j,k},\ \beta_{jk}\in\mathbb R\}$.

32 We have easily: $V_j = V_{j-1}\oplus W_{j-1}$, so that $f\in V_J \iff f = \sum_{j=-1}^{J-1}\sum_{k\in\{0,\dots,2^j-1\}}\beta_{jk}\psi_{j,k}$, and $f\in L^2([0,1]) \iff f = \sum_{j=-1}^{\infty}\sum_{k\in\{0,\dots,2^j-1\}}\beta_{jk}\psi_{j,k}$ with $\beta_{jk} = \int f\psi_{j,k}$ and $\sum_{j,k}\beta_{jk}^2 = \|f\|_2^2 < \infty$.

33 Wavelet estimators versus kernel. Defined as an orthogonal series estimator: $\hat f_J = \sum_{j=-1}^{J-1}\sum_{k\in\{0,\dots,2^j-1\}}\hat\beta_{jk}\psi_{j,k}$, with
$\hat\beta_{jk} = \frac1n\sum_{i=1}^n\psi_{j,k}(X_i)$ (density);
$\hat\beta_{jk} = \frac1n\sum_{i=1}^n\psi_{j,k}(X_i)\,Y_i$ (regression);
$\hat\beta_{jk} = \int_0^1\psi_{j,k}(t)\,dY^\epsilon_t$ (white noise).

34 Wavelet estimators versus kernel. Defined as a kernel estimator: $\hat f_J(x) = \sum_{k\in\{0,\dots,2^J-1\}}\hat\alpha_{Jk}\phi_{J,k}(x)$, with
$\hat\alpha_{Jk} = \frac1n\sum_{i=1}^n\phi_{J,k}(X_i)$ (density);
$\hat\alpha_{Jk} = \frac1n\sum_{i=1}^n\phi_{J,k}(X_i)\,Y_i$ (regression);
$\hat\alpha_{Jk} = \int_0^1\phi_{J,k}(t)\,dY^\epsilon_t$ (white noise).

35 $K_J(t,x) = \sum_{k\in\{0,\dots,2^J-1\}}\phi_{J,k}(t)\phi_{J,k}(x) = 2^J\sum_{k\in\{0,\dots,2^J-1\}}\phi(2^Jt-k)\phi(2^Jx-k) = 2^J K(2^Jt, 2^Jx)$, where $K(t,x) = \sum_k\phi(t-k)\phi(x-k)$. Then
$\hat f_J(x) = \frac1n\sum_{i=1}^n K_J(X_i,x)$ (density);
$\hat f_J(x) = \frac1n\sum_{i=1}^n K_J(X_i,x)\,Y_i$ (regression);
$\hat f_J(x) = \int_0^1 K_J(t,x)\,dY^\epsilon_t$ (white noise).
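
For the Haar scaling function $\phi = 1_{[0,1[}$, $K(t,x) = \sum_k\phi(t-k)\phi(x-k)$ equals $1$ exactly when $t$ and $x$ fall in the same unit interval, so $K_J(t,x) = 2^J\,1\{\lfloor 2^Jt\rfloor = \lfloor 2^Jx\rfloor\}$: the Haar kernel estimator at level $J$ is simply the histogram on the $2^J$ dyadic bins of $[0,1]$.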

36 Wavelet bases and Besov spaces. In this case the polynomial tail-compactness condition reads $V(s,M) = \{f = \sum_{j=-1}^{\infty}\sum_{k\in\{0,\dots,2^j-1\}}\beta_{jk}\psi_{j,k},\ \sum_{j\ge J}\sum_k\beta_{jk}^2\le M^2\,2^{-2Js}\ \forall J\in\mathbb N\}$. Let us define the Besov space $B^s_{2,\infty} = \{f = \sum_{j=-1}^{\infty}\sum_k\beta_{jk}\psi_{j,k},\ \sup_{J\in\mathbb N}2^{Js}\big[\sum_k\beta_{Jk}^2\big]^{1/2} < \infty\}$. $V(s,M)$ is then equivalent to a ball of this space.

37 Wavelet bases and Besov spaces. For $f\in B^s_{2,\infty}(M) = \{f = \sum_{j=-1}^{\infty}\sum_k\beta_{jk}\psi_{j,k},\ \sup_{J\in\mathbb N}2^{Js}\big[\sum_k\beta_{Jk}^2\big]^{1/2}\le M\}$ and $J_s$ such that $2^{J_s} = n^{\frac{1}{1+2s}}$: $\sup_{f\in B^s_{2,\infty}(M)}E\|\hat f_{J_s}-f\|^2\le c\,n^{-\frac{2s}{1+2s}}$.

38 Lower bounds. Moreover, the minimax lower bound $\inf_{\mathrm{Est}}\sup_{f\in B^s_{2,\infty}(M)}E\|\mathrm{Est}-f\|_2^2\ge c_0\,n^{-\frac{2s}{1+2s}}$, combined with $\sup_{f\in B^s_{2,\infty}(M)}E\|\hat f_{J_s}-f\|^2\le c\,n^{-\frac{2s}{1+2s}}$, says that $\hat f_{J_s}$ is rate optimal over $B^s_{2,\infty}(M)$; but $J_s$, defined by $2^{J_s} = n^{\frac{1}{1+2s}}$, depends on $s$.

39 Wavelet bases and Besov spaces. More generally,
$B^s_{p,\infty} = \{f = \sum_{j=-1}^{\infty}\sum_k\beta_{jk}\psi_{j,k},\ \sup_{j\in\mathbb N}2^{j(s+\frac12-\frac1p)}\big[\sum_k|\beta_{jk}|^p\big]^{1/p}<\infty\}$,
$B^s_{p,q} = \{f = \sum_{j=-1}^{\infty}\sum_k\beta_{jk}\psi_{j,k},\ \sum_{j\in\mathbb N}2^{j(s+\frac12-\frac1p)q}\big[\sum_k|\beta_{jk}|^p\big]^{q/p}<\infty\}$,
$B^s_{\infty,\infty} = \{f = \sum_{j=-1}^{\infty}\sum_k\beta_{jk}\psi_{j,k},\ \sup_{j\in\mathbb N}2^{j(s+\frac12)}\sup_k|\beta_{jk}|<\infty\}$.

40 Besov spaces versus Hölder spaces. $V_\alpha(L) = \{f:[0,1]\to\mathbb R,\ \sup_{|x-y|\le\delta}|f(x)-f(y)|\le L\delta^\alpha\ \forall\delta,\ |f(0)|\le L\}$, and $f\in V_\alpha(L)\ \Rightarrow\ |\beta_{jk}|\le M\,2^{-j(\alpha+\frac12)}\ \forall j,k\ \Rightarrow\ f\in B^\alpha_{\infty,\infty}\ \Rightarrow\ f\in B^\alpha_{2,\infty}$, i.e. $V_\alpha\subset B^\alpha_{\infty,\infty}\subset B^\alpha_{2,\infty}$.

41 Besov spaces: remarks. Wavelets: other wavelets exist, compactly supported or not, generally more regular than the Haar wavelets; see HKPT, Chapters 5, 6, 7, 8. Besov spaces: there are conditions on the wavelets ensuring that the Besov spaces defined earlier coincide with the classical ones (see HKPT, Theorem 9.4, p. 119). Sparsity: the Besov conditions are among the conditions called sparsity conditions (meaning essentially that, in the representation of $f$, only a few coefficients are meaningful).

42 Besov spaces: embeddings.
$q\le q'$: $B^s_{p,q}\subset B^s_{p,q'}$ (comparison of $\ell_q$ norms).
$p\le p'$: $B^s_{p,q}\subset B^{s'}_{p',q}$ if $s-\frac1p = s'-\frac1{p'}$ (comparison of $\ell_p$ norms).
$p\ge p'$, compactly supported case: $B^s_{p,q}\subset B^s_{p',q}$ (convexity inequalities).
Compactly supported case: $B^s_{p,q}\subset B^{s-[\frac1p-\frac1{p'}]_+}_{p',q}$.

43 Thresholding estimates in wavelet systems

44 Thresholding. Estimators: $\hat f = \sum_{j=-1}^{J}\sum_{k\in\{0,\dots,2^j-1\}}\hat\beta_{jk}\,I\big\{|\hat\beta_{jk}|\ge\kappa\sqrt{\tfrac{\log n}{n}}\big\}\,\psi_{j,k}$, with $2^J = \frac{n}{\log n}$ and
$\hat\beta_{jk} = \frac1n\sum_{i=1}^n\psi_{j,k}(X_i)$ (density);
$\hat\beta_{jk} = \frac1n\sum_{i=1}^n\psi_{j,k}(X_i)\,Y_i$ (regression);
$\hat\beta_{jk} = \int_0^1\psi_{j,k}(t)\,dY^\epsilon_t$ (white noise).
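
A sketch of the density version of this thresholded estimator with the Haar basis of slide 29; the constant $\kappa = 1$ and the Beta(2,2) sample are illustrative choices ($\kappa$ has to be calibrated in practice, see HKPT):

```python
import numpy as np

def haar_psi_jk(x, j, k):
    """Haar psi_{j,k}; by convention psi_{-1,0} = phi = 1_[0,1)."""
    if j == -1:
        return ((0.0 <= x) & (x < 1.0)).astype(float)
    y = 2.0 ** j * x - k
    return 2.0 ** (j / 2.0) * (((0.0 <= y) & (y < 0.5)).astype(float)
                               - ((0.5 <= y) & (y < 1.0)).astype(float))

def thresholded_haar_density(x_grid, sample, kappa=1.0):
    """Hard-thresholded wavelet density estimate: keep beta_hat_{jk} = (1/n) sum_i psi_{jk}(X_i)
    only if |beta_hat_{jk}| >= kappa * sqrt(log n / n), for levels j <= J with 2^J ~ n / log n."""
    sample, x_grid = np.asarray(sample), np.asarray(x_grid)
    n = sample.size
    threshold = kappa * np.sqrt(np.log(n) / n)
    J = int(np.floor(np.log2(n / np.log(n))))
    f_hat = np.zeros_like(x_grid, dtype=float)
    for j in range(-1, J + 1):
        for k in range(2 ** max(j, 0)):
            beta_hat = haar_psi_jk(sample, j, k).mean()
            if abs(beta_hat) >= threshold:
                f_hat += beta_hat * haar_psi_jk(x_grid, j, k)
    return f_hat

# toy usage on a Beta(2, 2) sample supported in [0, 1]
rng = np.random.default_rng(0)
x = rng.beta(2.0, 2.0, size=300)
grid = np.linspace(0.0, 1.0, 256, endpoint=False)
f_hat = thresholded_haar_density(grid, x, kappa=1.0)
```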

45 Thresholding = Adaptation. For all $s > s_0$:
$\sup_{f\in B^s_{\infty,\infty}(M)}E\|\hat f-f\|_2^2\le c\,\big(\tfrac{\log n}{n}\big)^{\frac{2s}{1+2s}}$ and
$\sup_{f\in B^s_{2,\infty}(M)}E\|\hat f-f\|_2^2\le c\,\big(\tfrac{\log n}{n}\big)^{\frac{2s}{1+2s}}$.

46 Thresholding = Adaptation, general result. Theorem 1. For $1\le p\le\infty$, $1\le r\le\infty$, $\pi\ge1$, $\kappa\ge\kappa_0$, $s>\frac1p$, there exist constants $c_p(M)$ such that:
if $s < \frac\pi2\big(\frac1p-\frac1\pi\big)_+$, then $\sup_{f\in B^s_{p,r}}E\|\hat f_n-f\|_\pi^\pi\le c(M)\,[\log n]^{\delta(s,p,q)}\,\Big[\frac{\log n}{n}\Big]^{\frac{(s-\frac1p+\frac1\pi)\pi}{2(s-\frac1p)+1}}$;
if $s \ge \frac\pi2\big(\frac1p-\frac1\pi\big)_+$, then $\sup_{f\in B^s_{p,r}}E\|\hat f_n-f\|_\pi^\pi\le c_p(M)\,[\log n]^{\delta(s,p,q)}\,\Big[\frac{\log n}{n}\Big]^{\frac{s\pi}{2s+1}}$.

47 Sequence space models, thresholding estimates

48 White noise model: $dY_t = f(t)\,dt+\epsilon\,dW_t$, $t\in[0,1]$, $f\in L^2([0,1],dt)$. $E = \{\psi_i,\ i\in\mathbb N\}$ an orthonormal basis of $L^2([0,1],dt)$, $f = \sum_i\theta_i\psi_i$, $x_i = \int\psi_i\,dY$, $i\in\mathbb N$. Then $x_i = \theta_i+\epsilon v_i$, $i\in\mathbb N$, where the $v_i$'s are i.i.d. $N(0,1)$ and $\theta = (\theta_i)_{i\in\mathbb N}\in\ell^2(\mathbb N)$.

49 General framework: $f = \sum_{i\in\mathbb N}\theta_ie_i$ (unknown) is randomly observed, meaning we can estimate $\theta_i$ by $\hat\theta_i^n$ for $i\in\Lambda_n$ with the following properties:

50 $E^n_f|\hat\theta_i^n-\theta_i|^{2p}\le C\,\sigma_i^{2p}\,c(n)^{2p}$, and $P^n\big(|\hat\theta_i^n-\theta_i|\ge\kappa\sigma_i c(n)/2\big)\le C\,\big(c(n)^{2p}\wedge c(n)^4\big)$, for all $i\in\Lambda_n$, with $c(n)\to0$.

51 Examples. $c(n) = \sqrt{\log(n)/n}$ and $|\Lambda_n| = c(n)^{-2}$ is the most common choice. Density estimation model (Donoho, Johnstone, K., P.): for wavelet bases, $\hat\theta_{jk} = \frac1n\sum_{i=1}^n\psi_{jk}(X_i)$. Regression model.

52 Examples. More delicate models (wavelet bases): stationary processes; evolutionary spectra (Neumann and von Sachs 1997); locally stationary processes (Donoho, Mallat and von Sachs 1998, and Mallat, Papanicolaou and Zhang 1998); partially observed diffusion models (Hoffmann 1999); multivariate extensions with $t\in[0,1]^d$ (Donoho 1997, Neumann 1998); Markov chain models (hidden or not) (Clemençon 2000).

53 Thresholding estimators: $\hat f_n = \sum_{i\in\Lambda_n}\hat\theta_i^n\,1\big(|\hat\theta_i^n|\ge\kappa\, c(n)\big)\,e_i$.
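
A toy simulation of this rule in the sequence model of slide 48, with $\sigma_i\equiv1$, the common choice $c(n) = \sqrt{\log n/n}$ and $|\Lambda_n| = c(n)^{-2}$ of slide 51; the sparse signal and $\kappa = 2$ are illustrative:

```python
import numpy as np

def hard_threshold(x, c_n, kappa=2.0, sigma=1.0):
    """Keep a noisy coefficient x_i only if |x_i| >= kappa * sigma_i * c(n)."""
    return np.where(np.abs(x) >= kappa * sigma * c_n, x, 0.0)

# Gaussian sequence model x_i = theta_i + eps * v_i, eps = 1/sqrt(n)
rng = np.random.default_rng(0)
n = 10_000
eps = 1.0 / np.sqrt(n)
c_n = np.sqrt(np.log(n) / n)                  # c(n) of slide 51
m = int(c_n ** -2)                            # |Lambda_n| = c(n)^{-2} observed coefficients
theta = np.zeros(m)
theta[:20] = rng.normal(size=20)              # illustrative sparse signal
x = theta + eps * rng.normal(size=m)
theta_hat = hard_threshold(x, c_n, kappa=2.0)
print(np.sum((theta_hat - theta) ** 2))       # empirical squared-error loss
```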

54 Maxisets. Definition 1. Define the maxiset associated with the sequence of estimators $\hat q_n$, the loss function $\rho$, the rate $\alpha_n$ and the constant $T$ as the set $MS(\hat q_n,\rho,\alpha_n)(T) := \{\theta\in\Theta,\ \sup_n E^n_\theta\,\rho(\hat q_n,q(\theta))\,(\alpha_n)^{-1}\le T\}$. Examples: for parametric regular sequences of models, we generally have $MS(\hat q_n,\rho,n^{-1/2})(T) = \Theta$ for various loss functions and a large enough constant $T$.

55 Density estimation: linear kernel methods. $X_1,\dots,X_n$ i.i.d. with density $f$, $\rho(\hat f_n,f) = \|\hat f_n-f\|_p^p$, $\Theta = \{f$ density, $\|f\|_p\le R\}$, $\hat E_{j(n)}(x) := \frac1n\sum_{i=1}^n E_{j(n)}(x,X_i)$, with $j(n)$ such that $2^{j(n)} = n^{1-\alpha}$, $\alpha\in(0,1)$.

56 $MS(\hat E_{j(n)},\rho,n^{-\alpha p/2}) :=: \Theta\cap B_{s,p}$, with $s = \frac{\alpha}{2(1-\alpha)}$, i.e. $\alpha = \frac{2s}{1+2s}$. Kerkyacharian, P., Stat. Proba. Letters (1993).

57 The meaning of $MS(\hat E_{j(n)},\rho,\alpha_n) :=: \Theta\cap B_{s,p}$ is: (i) for any $T$, there exists $M$ such that $MS(\hat E_{j(n)},\rho,\alpha_n)(T)\subset\Theta\cap B_{s,p}(M)$; (ii) for any $M$, there exists $T$ such that $MS(\hat E_{j(n)},\rho,\alpha_n)(T)\supset\Theta\cap B_{s,p}(M)$.

58 Maxisets for thresholding estimators. Model: the general framework, $f$ randomly observed. $\rho(\hat f_n,f) = \|\hat f_n-f\|_p^p$, $\Theta = \{f,\ \|f\|_p\le R\}$, $\hat f_n = \sum_{i\in\Lambda_n}\hat\theta_i^n\,1\big(|\hat\theta_i^n|\ge\sigma_i\kappa\, c(n)\big)\,e_i$.

59 For $p = 2$ and $e_i$ an ordinary orthonormal basis, let $\ell_{q,\infty}(E) = \{f = \sum_n\theta_ne_n\in X,\ \sup_{\lambda>0}\lambda^q\,\mathrm{card}\{n;\ |\theta_n|>\lambda\}<\infty\}$. For $0<s$, $0<r\le2$, $|\Lambda_n| = c(n)^{-r}$: $MS(\hat f_n,\rho,c(n)^{\alpha}) :=: \ell_{q,\infty}(E)\cap B_{\alpha/r,2}$, with $\alpha = \frac{2s}{1+2s}$ and $s = \frac1q-\frac12$.

60 2001, Cohen, DeVore, Kerkyacharian, P. $B_{s,p} = \{f\in L_p,\ \sup_{j\ge0}2^{js}\|E_jf-f\|_p<\infty\}$, $B_{u,2} = \{f = \sum_i\theta_ie_i\in L_2,\ \sup_n u_n^{-1}\,\|\sum_{i=n}^{\infty}\theta_ie_i\|_2<\infty\}$.

61 $B_{u,p} = \{f = \sum_i\theta_ie_i\in L_p,\ \sup_n u_n^{-1}\,\|\sum_{i=n}^{\infty}\theta_ie_i\|_p<\infty\}$.
