Nonparametric estimation under Shape Restrictions
Jon A. Wellner, University of Washington, Seattle
Statistical Seminar, Fréjus, France, August 30 - September 3, 2010
Outline: Five Lectures on Shape Restrictions

- L1: Monotone functions: maximum likelihood and least squares
- L2: Optimality of the MLE of a monotone density (and comparisons?)
- L3: Estimation of convex and k-monotone density functions
- L4: Estimation of log-concave densities: d = 1 and beyond
- L5: More on higher dimensions and some open problems
Outline: Lecture 2

- A: Local asymptotic minimax lower bounds
- B: Lower bounds for estimation of a monotone density: several scenarios
- C: Global lower bounds and upper bounds (briefly)
- D: Lower bounds for estimation of a convex density
- E: Lower bounds for estimation of a log-concave density
A. Local asymptotic minimax lower bounds

Proposition. (Two-point lower bound) Let $\mathcal{P}$ be a set of probability measures on a measurable space $(\mathcal{X}, \mathcal{A})$, and let $\nu$ be a real-valued function defined on $\mathcal{P}$. Moreover, let $l : [0,\infty) \to [0,\infty)$ be an increasing convex loss function with $l(0) = 0$. Then, for any $P_1, P_2 \in \mathcal{P}$ such that $H(P_1, P_2) < 1$, and with
$$E_{n,i} f(X_1,\ldots,X_n) = \int f(x_1,\ldots,x_n)\, dP_i(x_1)\cdots dP_i(x_n) = \int f(x)\, dP_i^n(x)$$
for $i = 1, 2$, it follows that
$$\inf_{T_n} \max\left\{ E_{n,1}\, l(|T_n - \nu(P_1)|),\ E_{n,2}\, l(|T_n - \nu(P_2)|) \right\} \ \ge\ l\left( \frac{1}{4}\, |\nu(P_1) - \nu(P_2)|\, \{1 - H^2(P_1, P_2)\}^{2n} \right). \tag{1}$$
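As a quick sanity check, the sketch below evaluates the right side of (1) with $l(x) = |x|$ for a normal location pair, using the closed form $H^2(N(\theta_1,1), N(\theta_2,1)) = 1 - \exp(-(\theta_1-\theta_2)^2/8)$. The model and the choice $\theta_2 - \theta_1 = 2/\sqrt{n}$ are our own illustration, not from the lectures.

```python
import numpy as np

def hellinger2_normal(t1, t2):
    # Closed form: H^2(N(t1,1), N(t2,1)) = 1 - exp(-(t1 - t2)^2 / 8)
    return 1.0 - np.exp(-(t1 - t2) ** 2 / 8.0)

def two_point_bound(nu1, nu2, H2, n):
    # Right side of (1) with loss l(x) = |x|
    return 0.25 * abs(nu1 - nu2) * (1.0 - H2) ** (2 * n)

# Check the closed form against numerical quadrature.
x = np.linspace(-20.0, 20.0, 400001)
p = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
q = np.exp(-(x - 1.0) ** 2 / 2) / np.sqrt(2 * np.pi)
h = 0.5 * (np.sqrt(p) - np.sqrt(q)) ** 2
H2_num = float(np.sum((h[1:] + h[:-1]) * 0.5 * np.diff(x)))
print(H2_num, hellinger2_normal(0.0, 1.0))   # agree to quadrature error

# With nu = the mean and theta_2 - theta_1 = 2/sqrt(n), the bound equals
# (1/2) e^{-1} n^{-1/2}: the parametric n^{-1/2} rate, uniformly in n.
vals = [np.sqrt(n) * two_point_bound(0.0, 2 / np.sqrt(n),
                                     hellinger2_normal(0.0, 2 / np.sqrt(n)), n)
        for n in (100, 400, 1600)]
print(vals)
```

Taking the separation of order $n^{-1/2}$ is what makes the factor $\{1 - H^2\}^{2n}$ stay bounded away from zero, a pattern repeated in every scenario below.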
Proof. By Jensen's inequality,
$$E_{n,i}\, l(|T_n - \nu(P_i)|) \ \ge\ l\left( E_{n,i} |T_n - \nu(P_i)| \right), \qquad i = 1, 2,$$
and hence the left side of (1) is bounded below by
$$l\left( \inf_{T_n} \max\{ E_{n,1}|T_n - \nu(P_1)|,\ E_{n,2}|T_n - \nu(P_2)| \} \right).$$
Thus it suffices to prove the proposition for $l(x) = x$. Let $p_1 = dP_1/d(P_1+P_2)$, $p_2 = dP_2/d(P_1+P_2)$, and $\mu = P_1 + P_2$ (or let $p_i$ be the density of $P_i$ with respect to some other convenient dominating measure $\mu$, $i = 1, 2$).
Two Facts:

Fact 1: Suppose $P, Q$ are absolutely continuous with respect to $\mu$, and
$$H^2(P, Q) \equiv \frac{1}{2}\int \{\sqrt{p} - \sqrt{q}\}^2\, d\mu = 1 - \int \sqrt{pq}\, d\mu \equiv 1 - \rho(P, Q).$$
Then
$$(1 - H^2(P, Q))^2 = \left\{ \int \sqrt{pq}\, d\mu \right\}^2 \le 2 \int (p \wedge q)\, d\mu.$$

Fact 2: If $P$ and $Q$ are two probability measures on a measurable space $(\mathcal{X}, \mathcal{A})$ and $P^n$ and $Q^n$ denote the corresponding product measures on $(\mathcal{X}^n, \mathcal{A}^n)$ (of $X_1, \ldots, X_n$ i.i.d. as $P$ or $Q$ respectively), then $\rho(P, Q) \equiv \int \sqrt{pq}\, d\mu$ satisfies
$$\rho(P^n, Q^n) = \rho(P, Q)^n. \tag{2}$$

Exercise. Prove Fact 1.
Exercise. Prove Fact 2.
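Both facts are easy to check numerically on a finite sample space. The sketch below (our own illustration) verifies the product rule (2) by brute-force enumeration and checks the inequality of Fact 1 for a pair of distributions on three points.

```python
import itertools
import numpy as np

# Two distributions on {0, 1, 2}; mu = counting measure.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])

rho = float(np.sum(np.sqrt(p * q)))   # Hellinger affinity rho(P, Q)
H2 = 1.0 - rho                        # H^2(P, Q) = 1 - rho(P, Q)

# Fact 2: rho(P^n, Q^n) = rho(P, Q)^n, checked by enumeration for n = 4.
n = 4
rho_n = sum(
    np.sqrt(np.prod([p[i] for i in idx]) * np.prod([q[i] for i in idx]))
    for idx in itertools.product(range(3), repeat=n)
)
print(rho_n, rho ** n)                # equal up to rounding

# Fact 1: (1 - H^2)^2 = rho^2 <= 2 * sum of min(p, q).
print((1.0 - H2) ** 2, 2.0 * float(np.sum(np.minimum(p, q))))
```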
$$\begin{aligned}
\max\{ E_{n,1}|T_n - \nu(P_1)|,\ E_{n,2}|T_n - \nu(P_2)| \}
&\ge \frac{1}{2}\left\{ E_{n,1}|T_n - \nu(P_1)| + E_{n,2}|T_n - \nu(P_2)| \right\} \\
&= \frac{1}{2}\int \left[ |T_n(x) - \nu(P_1)| \prod_{i=1}^n p_1(x_i) + |T_n(x) - \nu(P_2)| \prod_{i=1}^n p_2(x_i) \right] d\mu(x_1)\cdots d\mu(x_n) \\
&\ge \frac{1}{2}\int \left[ |T_n(x) - \nu(P_1)| + |T_n(x) - \nu(P_2)| \right] \left( \prod_{i=1}^n p_1(x_i) \wedge \prod_{i=1}^n p_2(x_i) \right) d\mu(x_1)\cdots d\mu(x_n) \\
&\ge \frac{1}{2}\, |\nu(P_1) - \nu(P_2)| \int \prod_{i=1}^n p_1(x_i) \wedge \prod_{i=1}^n p_2(x_i)\, d\mu(x_1)\cdots d\mu(x_n) \\
&\ge \frac{1}{4}\, |\nu(P_1) - \nu(P_2)| \left\{ \int \sqrt{ \prod_{i=1}^n p_1(x_i) \prod_{i=1}^n p_2(x_i) }\, d\mu(x_1)\cdots d\mu(x_n) \right\}^2 \\
&= \frac{1}{4}\, |\nu(P_1) - \nu(P_2)|\, \{1 - H^2(P_1^n, P_2^n)\}^2 \qquad \text{by Fact 1} \\
&= \frac{1}{4}\, |\nu(P_1) - \nu(P_2)|\, \{1 - H^2(P_1, P_2)\}^{2n} \qquad \text{by Fact 2.}
\end{aligned}$$
Several scenarios, estimation of $f(x_0)$:

S1: $f(x_0) > 0$, $f'(x_0) < 0$.
S2: $x_0 \in (a, b)$ with $f$ constant on $(a, b)$; in particular, $f(x) = 1_{[0,1]}(x)$, $x_0 \in (0,1)$.
S3: $f$ is discontinuous at $x_0$.
S4: $f^{(j)}(x_0) = 0$ for $j = 1, \ldots, k-1$, $f^{(k)}(x_0) \ne 0$.
S1: $f_0(x_0) > 0$, $f_0'(x_0) < 0$.

Suppose that we want to estimate $\nu(f) = f(x_0)$ for a fixed $x_0$. Let $f_0$ be the density corresponding to $P_0$, and suppose that $f_0'(x_0) < 0$. To apply our two-point lower bound Proposition we need to construct a sequence of densities $f_n$ that are near $f_0$ in the sense that
$$n H^2(f_n, f_0) \le A$$
for some constant $A$, and $|\nu(f_n) - \nu(f_0)| = b_n^{-1}$ where $b_n \to \infty$. Hence we will try the following choice of $f_n$: for $c > 0$, define
$$f_n(x) = \begin{cases} f_0(x), & x \le x_0 - cn^{-1/3} \text{ or } x > x_0 + cn^{-1/3}, \\ f_0(x_0 - cn^{-1/3}), & x_0 - cn^{-1/3} < x \le x_0, \\ f_0(x_0 + cn^{-1/3}), & x_0 < x \le x_0 + cn^{-1/3}. \end{cases}$$
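The flattened pieces trade off mass at order $\delta^3$ with $\delta = cn^{-1/3}$, so $f_n$ is still nonincreasing and integrates to $1$ up to a negligible defect. A minimal numeric sketch with the illustrative choices $f_0 = \mathrm{Exp}(1)$, $x_0 = 1$, $c = 1$ (our own assumptions):

```python
import numpy as np

# S1 perturbation of f0 = Exp(1) around x0, with delta = c * n^(-1/3).
f0 = lambda x: np.exp(-x)
x0, c, n = 1.0, 1.0, 1000
delta = c * n ** (-1.0 / 3.0)

def f_n(x):
    x = np.asarray(x, dtype=float)
    out = f0(x)
    out = np.where((x > x0 - delta) & (x <= x0), f0(x0 - delta), out)
    out = np.where((x > x0) & (x <= x0 + delta), f0(x0 + delta), out)
    return out

xs = np.linspace(0.0, 30.0, 2_000_001)
ys = f_n(xs)
mass = float(np.sum((ys[1:] + ys[:-1]) * 0.5 * np.diff(xs)))
print(mass)                                 # ~ 1: mass defect is O(delta^3)
print(bool(np.all(np.diff(ys) <= 1e-12)))   # f_n is still nonincreasing
```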
It is easy to see that
$$n^{1/3}\, |\nu(f_n) - \nu(f_0)| = n^{1/3}\left( f_0(x_0 - cn^{-1/3}) - f_0(x_0) \right) \to |f_0'(x_0)|\, c. \tag{3}$$
On the other hand some calculation shows that
$$\begin{aligned}
H^2(f_n, f_0) &= \frac{1}{2}\int_0^\infty \left[ \sqrt{f_n(x)} - \sqrt{f_0(x)} \right]^2 dx
= \frac{1}{2}\int_0^\infty \frac{[f_n(x) - f_0(x)]^2}{[\sqrt{f_n(x)} + \sqrt{f_0(x)}]^2}\, dx \\
&= \frac{1}{2}\int_{x_0 - cn^{-1/3}}^{x_0 + cn^{-1/3}} \frac{[f_n(x) - f_0(x)]^2}{[\sqrt{f_n(x)} + \sqrt{f_0(x)}]^2}\, dx
\ \sim\ \frac{f_0'(x_0)^2\, c^3}{12\, f_0(x_0)\, n}.
\end{aligned}$$
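The limit $n H^2(f_n, f_0) \to f_0'(x_0)^2 c^3 / (12 f_0(x_0))$ can be checked by direct numerical integration; again $f_0 = \mathrm{Exp}(1)$, $x_0 = 1$, $c = 1$ are our illustrative choices, so the limit is $e^{-1}/12$.

```python
import numpy as np

# Check n * H^2(f_n, f0) -> f0'(x0)^2 c^3 / (12 f0(x0)) for f0 = Exp(1).
f0 = lambda x: np.exp(-x)
x0, c = 1.0, 1.0
limit = np.exp(-2 * x0) * c ** 3 / (12 * np.exp(-x0))   # = e^{-1}/12

ratios = []
for n in (10 ** 3, 10 ** 4, 10 ** 5):
    d = c * n ** (-1.0 / 3.0)
    # f_n - f_0 vanishes off (x0 - d, x0 + d]; integrate H^2 there only.
    x = np.linspace(x0 - d, x0 + d, 200001)
    fn = np.where(x <= x0, f0(x0 - d), f0(x0 + d))
    h = 0.5 * (np.sqrt(fn) - np.sqrt(f0(x))) ** 2
    H2 = float(np.sum((h[1:] + h[:-1]) * 0.5 * np.diff(x)))
    ratios.append(n * H2)
    print(n, n * H2, limit)   # n * H^2 approaches the limit as n grows
```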
Now we can combine these two pieces with our two-point lower bound Proposition to find that, for any estimator $T_n$ of $\nu(f) = f(x_0)$ and the loss function $l(x) = |x|$, we have
$$\begin{aligned}
\inf_{T_n} \max\{ E_n\, n^{1/3}|T_n - \nu(f_n)|,\ E_0\, n^{1/3}|T_n - \nu(f_0)| \}
&\ge \frac{1}{4}\, n^{1/3}(\nu(f_n) - \nu(f_0)) \left\{ 1 - \frac{n H^2(f_n, f_0)}{n} \right\}^{2n} \\
&= \frac{1}{4}\, n^{1/3}\left( f_0(x_0 - cn^{-1/3}) - f_0(x_0) \right) \left\{ 1 - \frac{n H^2(f_n, f_0)}{n} \right\}^{2n} \\
&\to \frac{1}{4}\, |f_0'(x_0)|\, c\, \exp\left( -2\, \frac{f_0'(x_0)^2\, c^3}{12\, f_0(x_0)} \right)
= \frac{1}{4}\, |f_0'(x_0)|\, c\, \exp\left( -\frac{f_0'(x_0)^2\, c^3}{6\, f_0(x_0)} \right).
\end{aligned}$$
We now choose $c$ to maximize the quantity on the right side. It is easily seen that the maximum is achieved when
$$c = c_0 \equiv \left( \frac{2 f_0(x_0)}{f_0'(x_0)^2} \right)^{1/3}.$$
This yields
$$\liminf_{n\to\infty} \inf_{T_n} \max\{ E_n\, n^{1/3}|T_n - \nu(f_n)|,\ E_0\, n^{1/3}|T_n - \nu(f_0)| \}
\ \ge\ \frac{e^{-1/3}}{4} \left( 2\, f_0(x_0)\, |f_0'(x_0)| \right)^{1/3}.$$
This lower bound has the appropriate structure in the sense that the (nonparametric) MLE of $f$, $\widehat f_n(x_0)$, converges at rate $n^{1/3}$, and it has the same dependence on $f_0(x_0)$ and $f_0'(x_0)$ as does the MLE.
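The maximization is elementary calculus: $\frac{d}{dc}\left[c\,e^{-Bc^3}\right] = 0$ gives $c_0^3 = 1/(3B)$, which is $c_0^3 = 2 f_0(x_0)/f_0'(x_0)^2$ for $B = f_0'(x_0)^2/(6 f_0(x_0))$. A grid check with hypothetical values of $f_0(x_0)$ and $f_0'(x_0)$ (our own illustration):

```python
import numpy as np

# Maximize g(c) = (1/4)|f0'(x0)| c exp(-f0'(x0)^2 c^3 / (6 f0(x0))) over c > 0
# and compare the argmax with c0 = (2 f0(x0) / f0'(x0)^2)^(1/3).
f0_x0, df0_x0 = 0.5, -0.25          # hypothetical values of f0(x0), f0'(x0)
B = df0_x0 ** 2 / (6 * f0_x0)

c = np.linspace(1e-3, 10.0, 2_000_000)
g = 0.25 * abs(df0_x0) * c * np.exp(-B * c ** 3)
c_star = float(c[np.argmax(g)])
c0 = (2 * f0_x0 / df0_x0 ** 2) ** (1.0 / 3.0)
print(c_star, c0)                   # agree to grid resolution

# The maximal value matches (e^{-1/3}/4)(2 f0(x0) |f0'(x0)|)^{1/3}:
bound = np.exp(-1.0 / 3.0) / 4 * (2 * f0_x0 * abs(df0_x0)) ** (1.0 / 3.0)
print(float(g.max()), bound)
```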
Furthermore, note that for $n$ sufficiently large
$$\sup_{f : H(f, f_0) \le C n^{-1/2}} E_f\, n^{1/3}|T_n - \nu(f)| \ \ge\ \max\{ E_n\, n^{1/3}|T_n - \nu(f_n)|,\ E_0\, n^{1/3}|T_n - \nu(f_0)| \}$$
if $C^2 \ge 2A \equiv 2 f_0'(x_0)^2 c_0^3 / (12 f_0(x_0))$, and hence we conclude that
$$\liminf_{n\to\infty} \inf_{T_n} \sup_{f : H(f, f_0) \le C n^{-1/2}} E_f\, n^{1/3}|T_n - \nu(f)|
\ \ge\ \frac{e^{-1/3}}{4} \left( 2 f_0(x_0) |f_0'(x_0)| \right)^{1/3}
= \frac{e^{-1/3}\, 2^{2/3}}{4} \left( \frac{1}{2}\, f_0(x_0)\, |f_0'(x_0)| \right)^{1/3}$$
for all $C$ sufficiently large.

Comparison of $E|S(0)|$ with $e^{-1/3}\, 2^{2/3}/4 \approx 0.284$? From Groeneboom and Wellner (2001), $E|S(0)| = 2 E|Z| = 2(\ldots) = \ldots$
S2: $x_0 \in (a, b)$ with $f_0(x) = f_0(x_0) > 0$ for all $x \in (a, b)$.

To apply our two-point lower bound Proposition we again need to construct a sequence of densities $f_n$ that are near $f_0$ in the sense that $n H^2(f_n, f_0) \le A$ for some constant $A$, and $|\nu(f_n) - \nu(f_0)| = b_n^{-1}$ where $b_n \to \infty$. In this scenario we define a sequence of densities $\{f_n\}$ by
$$f_n(x) = \begin{cases} f_0(x), & x \le a_n, \\[2pt] f_0(x) + \dfrac{c}{\sqrt{n}}\, \dfrac{b-a}{x_0 - a}, & a_n < x \le x_0, \\[2pt] f_0(x) - \dfrac{c}{\sqrt{n}}\, \dfrac{b-a}{b - x_0}, & x_0 < x < b_n, \\[2pt] f_0(x), & x \ge b_n, \end{cases}$$
where
$$a_n \equiv \sup\{ x : f_0(x) \ge f_0(x_0) + c n^{-1/2} (b-a)/(x_0 - a) \},$$
$$b_n \equiv \inf\{ x : f_0(x) < f_0(x_0) - c n^{-1/2} (b-a)/(b - x_0) \}.$$
The intervals $(a_n, a)$ and $(b, b_n)$ may be empty if $f(a-) > f(a+)$ and/or $f(b+) < f(b-)$ and $n$ is large.
It is easy to see that
$$\sqrt{n}\, |\nu(f_n) - \nu(f_0)| = \sqrt{n}\, |f_n(x_0) - f_0(x_0)| = c\, \frac{b-a}{x_0 - a}. \tag{4}$$
On the other hand some calculation shows that
$$H^2(f_n, f_0) \ \lesssim\ \frac{c^2 (b-a)^2}{4 n f_0(x_0)} \left\{ \frac{1}{x_0 - a} + \frac{1}{b - x_0} \right\}
= \frac{c^2 (b-a)^3}{4 n f_0(x_0)(x_0 - a)(b - x_0)}.$$
Combining these two pieces with the two-point lower bound Proposition we find that, in scenario 2, for any estimator $T_n$ of $\nu(f) = f(x_0)$ and the loss function $l(x) = |x|$ we have
$$\begin{aligned}
\inf_{T_n} \max\{ E_n\, \sqrt{n}\, |T_n - \nu(f_n)|,\ E_0\, \sqrt{n}\, |T_n - \nu(f_0)| \}
&\ge \frac{1}{4}\, \sqrt{n}\, (\nu(f_n) - \nu(f_0)) \left\{ 1 - \frac{n H^2(f_n, f_0)}{n} \right\}^{2n} \\
&= \frac{1}{4}\, c\, \frac{b-a}{x_0 - a} \left\{ 1 - \frac{n H^2(f_n, f_0)}{n} \right\}^{2n} \\
&\gtrsim \frac{1}{4}\, c\, \frac{b-a}{x_0 - a}\, \exp\left( - \frac{c^2 (b-a)^3}{2 f_0(x_0)(x_0 - a)(b - x_0)} \right) \equiv A\, c\, e^{-B c^2}.
\end{aligned}$$
We now choose $c$ to maximize the quantity on the right side. It is easily seen that the maximum is achieved when $c = c_0 \equiv 1/\sqrt{2B}$, with $A c_0 \exp(-B c_0^2) = A c_0 \exp(-1/2)$ and
$$c_0 = \left( \frac{f_0(x_0)(x_0 - a)(b - x_0)}{(b-a)^3} \right)^{1/2}.$$
This yields
$$\liminf_{n\to\infty} \inf_{T_n} \max\{ E_n\, \sqrt{n}|T_n - \nu(f_n)|,\ E_0\, \sqrt{n}|T_n - \nu(f_0)| \}
\ \ge\ \frac{e^{-1/2}}{4} \sqrt{\frac{f_0(x_0)}{b-a}}\, \sqrt{\frac{b - x_0}{x_0 - a}}.$$
Repeating this argument with the right-continuous version of the sequence $\{f_n\}$ yields a similar bound, but with the factor $(b - x_0)/(x_0 - a)$ replaced by $(x_0 - a)/(b - x_0)$.
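The same one-line calculus fact is used here: for $g(c) = A c\, e^{-Bc^2}$, $g'(c_0) = 0$ gives $c_0 = 1/\sqrt{2B}$ and $g(c_0) = A c_0 e^{-1/2}$. A grid check with arbitrary $A, B$ (our own illustration):

```python
import numpy as np

# For g(c) = A c exp(-B c^2), the maximizer is c0 = 1/sqrt(2B),
# with maximal value A c0 e^{-1/2}; grid check.
A, B = 0.7, 1.3                     # arbitrary positive constants
c = np.linspace(1e-4, 5.0, 1_000_000)
g = A * c * np.exp(-B * c ** 2)
c_star = float(c[np.argmax(g)])
c0 = 1.0 / np.sqrt(2 * B)
print(c_star, c0)
print(float(g.max()), A * c0 * np.exp(-0.5))
```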
Taking the maximum of the two lower bounds yields the last display with the right side replaced by
$$\begin{aligned}
\frac{e^{-1/2}}{4} \sqrt{\frac{f_0(x_0)}{b-a}}\,
&\max\left\{ \sqrt{\frac{b - x_0}{x_0 - a}},\ \sqrt{\frac{x_0 - a}{b - x_0}} \right\} \\
&\ge \frac{e^{-1/2}}{4} \sqrt{\frac{f_0(x_0)}{b-a}}
\left\{ \sqrt{\frac{b - x_0}{x_0 - a}} \cdot \frac{x_0 - a}{b-a} + \sqrt{\frac{x_0 - a}{b - x_0}} \cdot \frac{b - x_0}{b-a} \right\},
\end{aligned}$$
since a maximum dominates any convex combination. This lower bound has the appropriate structure in the sense that the MLE of $f$, $\widehat f_n(x_0)$, converges at rate $n^{1/2}$, and the limiting behavior of the MLE has exactly the same dependence on $f_0(x_0)$, $b-a$, $x_0-a$, and $b-x_0$.
Theorem. (Carolan and Dykstra, 1999) If $f_0$ is decreasing with $f_0$ constant on $(a, b)$, the maximal open interval containing $x_0$, then, with $p \equiv f_0(x_0)(b-a) = P_0(a < X < b)$,
$$\sqrt{n}\left( \widehat f_n(x_0) - f_0(x_0) \right) \to_d \sqrt{\frac{f_0(x_0)}{b-a}} \left\{ \sqrt{p}\, Z + S\!\left( \frac{x_0 - a}{b-a} \right) \right\}$$
where $Z \sim N(0,1)$ and $S$ is the process of left-derivatives of the least concave majorant $\widehat U$ of a Brownian bridge process $U$ independent of $Z$.

Note that by using Groeneboom (1983),
$$E\left| \sqrt{\frac{f_0(x_0)}{b-a}} \left\{ \sqrt{p}\, Z + S\!\left( \frac{x_0 - a}{b-a} \right) \right\} \right|
\ \ge\ \sqrt{\frac{f_0(x_0)}{b-a}}\, E\left| S\!\left( \frac{x_0 - a}{b-a} \right) \right|
= \sqrt{\frac{2 f_0(x_0)}{\pi (b-a)^3}} \left\{ \frac{(b - x_0)^{3/2}}{(x_0 - a)^{1/2}} + \frac{(x_0 - a)^{3/2}}{(b - x_0)^{1/2}} \right\}.$$
S3: $f_0(x_0-) > f_0(x_0+)$.

In this case we consider estimation of the functional $\nu(f) = (f(x_0+) + f(x_0-))/2$ (rather than $f(x_0)$ itself). To apply our two-point lower bound Proposition, consider the following choice of $\tilde f_n$: for $c > 0$, define
$$\tilde f_n(x) = \begin{cases} f_0(x), & x \le x_0 \text{ or } x > b_n, \\[2pt] f_0(x_0-) + (x - x_0)\, \dfrac{f_0(b_n) - f_0(x_0-)}{c/n}, & x_0 < x \le b_n, \end{cases}$$
where $b_n \equiv x_0 + c/n$. Then define $f_n = \tilde f_n / \int_0^\infty \tilde f_n(y)\, dy$. In this case
$$\nu(f_n) - \nu(f_0) = \tilde f_n(x_0)(1 + o(1)) - \frac{f_0(x_0+) + f_0(x_0-)}{2}
= \frac{1}{2}\left( f_0(x_0-) - f_0(x_0+) \right) + o(1) \equiv d + o(1).$$
Some calculation shows that
$$H^2(f_n, f_0) = \frac{c\, r^2\, (1 + o(1))}{n}, \qquad \text{where} \quad
r^2 \equiv \frac{\left\{ \sqrt{f_0(x_0-)} - \sqrt{f_0(x_0+)} \right\}^2 \left\{ 3\sqrt{f_0(x_0-)} + \sqrt{f_0(x_0+)} \right\}}{12 \left\{ \sqrt{f_0(x_0-)} + \sqrt{f_0(x_0+)} \right\}}.$$
Combining these pieces with the two-point lower bound yields
$$\begin{aligned}
\inf_{T_n} \max\{ E_n |T_n - \nu(f_n)|,\ E_0 |T_n - \nu(f_0)| \}
&\ge \frac{1}{4}\, |\nu(f_n) - \nu(f_0)| \left\{ 1 - \frac{n H^2(f_n, f_0)}{n} \right\}^{2n} \\
&= \frac{1}{8}\left( f_0(x_0-) - f_0(x_0+) \right)(1 + o(1)) \left\{ 1 - \frac{c\, r^2 (1 + o(1))}{n} \right\}^{2n} \\
&\to \frac{d}{4} \exp\left( -2 c r^2 \right) = \frac{d}{4e} \quad \text{by choosing } c = 1/(2 r^2).
\end{aligned}$$
This corresponds to the following theorem for the MLE $\widehat f_n$:

Theorem. (Anevski and Hössjer, 2002; W, 2007) If $x_0$ is a discontinuity point of $f_0$, $d \equiv (f_0(x_0-) - f_0(x_0+))/2$ with $f_0(x_0+) > 0$ and $f_0(x_0) = (f_0(x_0-) + f_0(x_0+))/2$, then
$$\widehat f_n(x_0) - f_0(x_0) \to_d R(0)$$
where $h \mapsto R(h)$ is the process of left-derivatives of the least concave majorant $\widehat M$ of the process $M$ defined by
$$M(h) = N_0(h) - d\, |h|, \qquad N_0(h) \equiv \begin{cases} N(f_0(x_0+) h) - f_0(x_0+) h, & h \ge 0, \\ N(f_0(x_0-) h) - f_0(x_0-) h, & h < 0, \end{cases}$$
where $N$ is a standard (rate 1) two-sided Poisson process on $\mathbb{R}$.
S4: $f_0(x_0) > 0$, $f_0^{(j)}(x_0) = 0$, $j = 1, 2, \ldots, p-1$, and $f_0^{(p)}(x_0) \ne 0$.

In this case, consider the perturbation $f_\epsilon$ of $f_0$ given for $\epsilon > 0$ by
$$f_\epsilon(x) = \begin{cases} f_0(x), & x \le x_0 - \epsilon \text{ or } x > x_0 + \epsilon, \\ f_0(x_0 - \epsilon), & x_0 - \epsilon < x \le x_0, \\ f_0(x_0 + \epsilon), & x_0 < x \le x_0 + \epsilon. \end{cases}$$
Then for $\nu(f) = f(x_0)$,
$$\nu(f_\epsilon) - \nu(f_0) \sim \frac{|f_0^{(p)}(x_0)|}{p!}\, \epsilon^p, \qquad
H^2(f_\epsilon, f_0) \sim A_p\, \frac{f_0^{(p)}(x_0)^2}{f_0(x_0)}\, \epsilon^{2p+1} \equiv B_p\, \epsilon^{2p+1},$$
where
$$A_p \equiv \frac{2 p^2}{(2\, p!)^2 (2 p^2 + 3 p + 1)}.$$
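The constant $A_p$ can be checked numerically. With $p = 3$ we can use the decreasing density $f_0(x) = (2 - (x-1)^3)/4$ on $[0, 2]$ with $x_0 = 1$, so that $f_0'(x_0) = f_0''(x_0) = 0$ and $f_0^{(3)}(x_0) = -3/2$; this test density is our own construction, not from the lectures.

```python
import math
import numpy as np

# Check H^2(f_eps, f0) ~ A_p f0^(p)(x0)^2 eps^(2p+1) / f0(x0) for p = 3,
# with f0(x) = (2 - (x-1)^3)/4 on [0, 2] and x0 = 1 (f0'''(x0) = -3/2).
p = 3
A_p = 2 * p ** 2 / ((2 * math.factorial(p)) ** 2 * (2 * p ** 2 + 3 * p + 1))
f0 = lambda x: (2.0 - (x - 1.0) ** 3) / 4.0
x0 = 1.0
target = A_p * (3.0 / 2.0) ** 2 / f0(x0)        # = 9/448

eps = 0.05
x = np.linspace(x0 - eps, x0 + eps, 400001)
feps = np.where(x <= x0, f0(x0 - eps), f0(x0 + eps))   # the flat perturbation
h = 0.5 * (np.sqrt(feps) - np.sqrt(f0(x))) ** 2
H2 = float(np.sum((h[1:] + h[:-1]) * 0.5 * np.diff(x)))
ratio = H2 / eps ** (2 * p + 1)
print(ratio, target)                            # close for small eps
```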
Choosing $\epsilon = c n^{-1/(2p+1)}$, plugging into our two-point bound, and optimizing with respect to $c$ yields
$$\begin{aligned}
\inf_{T_n} \max\{ n^{p/(2p+1)} E_n |T_n - \nu(f_n)|,\ n^{p/(2p+1)} E_0 |T_n - \nu(f_0)| \}
&\ge \frac{1}{4}\, n^{p/(2p+1)} |\nu(f_n) - \nu(f_0)| \left\{ 1 - \frac{n H^2(f_n, f_0)}{n} \right\}^{2n} \\
&\sim \frac{1}{4}\, \frac{|f_0^{(p)}(x_0)|}{p!}\, c^p \exp\left( -2 B_p\, c^{2p+1} \right) \\
&= D_p \left( |f_0^{(p)}(x_0)|\, f_0(x_0)^p \right)^{1/(2p+1)},
\end{aligned}$$
taking
$$c = \left( \frac{p}{2 (2p+1) B_p} \right)^{1/(2p+1)}, \qquad \text{with} \quad
D_p \equiv \frac{1}{4\, p!} \left( \frac{p}{2 (2p+1) A_p} \right)^{p/(2p+1)} \exp\left( -\frac{p}{2p+1} \right).$$
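The general-$p$ maximization is again one line of calculus: $\frac{d}{dc}\left[ c^p e^{-2 B_p c^{2p+1}} \right] = 0$ gives $c_0^{2p+1} = p / (2(2p+1)B_p)$. A grid check for several $p$ with a hypothetical $B_p$ (our own illustration):

```python
import numpy as np

# argmax_c of c^p exp(-2 B_p c^(2p+1)) is c0 = (p / (2 (2p+1) B_p))^(1/(2p+1)).
B = 0.4                               # hypothetical B_p
c = np.linspace(1e-4, 5.0, 1_000_000)
c_stars, c0s = [], []
for p in (1, 2, 3):
    g = c ** p * np.exp(-2 * B * c ** (2 * p + 1))
    c_stars.append(float(c[np.argmax(g)]))
    c0s.append((p / (2 * (2 * p + 1) * B)) ** (1.0 / (2 * p + 1)))
    print(p, c_stars[-1], c0s[-1])
```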
The resulting lower bound corresponds to the following theorem for $\widehat f_n$:

Theorem. (Wright (1981); Leurgans (1982); Anevski and Hössjer (2002)) Suppose that $f_0^{(j)}(x_0) = 0$ for $j = 1, \ldots, p-1$, $f_0^{(p)}(x_0) \ne 0$, and $f_0^{(p)}$ is continuous at $x_0$. Then
$$n^{p/(2p+1)} \left( \widehat f_n(x_0 + n^{-1/(2p+1)} t) - f_0(x_0) \right) \to_d C_p\, S_p(t)$$
where $S_p$ is the process given by the left-derivatives of the least concave majorant $\widehat Y_p$ of
$$Y_p(t) \equiv W(t) - |t|^{p+1},$$
and where
$$C_p = \left( f_0(x_0)^p\, |f_0^{(p)}(x_0)| / (p+1)! \right)^{1/(2p+1)}.$$
In particular,
$$n^{p/(2p+1)} \left( \widehat f_n(x_0) - f_0(x_0) \right) \to_d C_p\, S_p(0).$$
Proof. Switching + (argmax-)continuous mapping theorem.
Summary: The MLE $\widehat f_n$ is locally adaptive to $f_0$, at least in scenarios S1-S4.

- S1: rate $n^{1/3}$; localization $n^{-1/3}$; constants agree with minimax lower bound.
- S2: rate $n^{1/2}$; localization $n^0 = 1$ (none); constants agree with minimax bound.
- S3: rate $n^0 = 1$; localization $n^{-1}$; constants agree(?).
- S4: rate $n^{p/(2p+1)}$; localization $n^{-1/(2p+1)}$; constants agree.
C: Global lower and upper bounds (briefly)

Birgé (1986, 1989) expresses the global optimality of $\widehat f_n$ in terms of its $L_1$ risk as follows.

Lower bound: Birgé (1987). Let $\mathcal F$ denote the class of all decreasing densities $f$ on $[0,1]$ satisfying $f \le M$ with $M > 1$, and let the minimax risk for $\mathcal F$ with respect to the $L_1$ metric $d_1(f, g) \equiv \int |f(x) - g(x)|\, dx$, based on $n$ observations, be
$$R_M(d_1, n) \equiv \inf_{\widehat f_n} \sup_{f \in \mathcal F} E_f\, d_1(\widehat f_n, f).$$
Then there is an absolute constant $C$ such that
$$R_M(d_1, n) \ge C \left( \frac{\log M}{n} \right)^{1/3}.$$

Upper bound, Grenander: Birgé (1989). Let $\widehat f_n$ denote the Grenander estimator of $f \in \mathcal F$. Then
$$\sup_{f \in \mathcal F} E_f\, d_1(\widehat f_n, f) \lesssim \left( \frac{\log M}{n} \right)^{1/3}.$$
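The Grenander estimator appearing in both results is the left derivative of the least concave majorant (LCM) of the empirical distribution function. A minimal self-contained implementation via an upper-hull scan (our own sketch, not code from the lectures; it assumes distinct sample values):

```python
import numpy as np

def grenander(sample):
    """Grenander estimator from an i.i.d. sample of a decreasing density on [0, inf).

    Returns knots 0 = t_0 < t_1 < ... < t_k and the estimator's value on each
    interval (t_{j-1}, t_j]: the left derivative of the least concave majorant
    (LCM) of the empirical distribution function.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    pts = [(0.0, 0.0)] + [(xi, (i + 1) / n) for i, xi in enumerate(x)]
    hull = []            # upper concave hull of the ECDF points = LCM
    for pt in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle point if it lies on or below the chord hull[-2] -> pt
            if (y2 - y1) * (pt[0] - x1) <= (pt[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(pt)
    hx = np.array([q[0] for q in hull])
    hy = np.array([q[1] for q in hull])
    return hx, np.diff(hy) / np.diff(hx)   # knots, decreasing step heights

rng = np.random.default_rng(0)
knots, fhat = grenander(rng.exponential(size=2000))
print(bool(np.all(np.diff(fhat) <= 1e-12)))      # decreasing step function
print(float(np.sum(fhat * np.diff(knots))))      # total mass 1
```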
Birgé's bounds are complemented by the remarkable results of Groeneboom (1985) and Groeneboom, Hooghiemstra, and Lopuhaä (1999). Set
$$V(t) \equiv \sup\{ s : W(s) - (s - t)^2 \text{ is maximal} \}$$
where $W$ is a standard two-sided Brownian motion process starting from 0.

Theorem. (Groeneboom (1985), GHL (1999)) Suppose that $f$ is a decreasing density on $[0,1]$ satisfying:

- A1. $0 < f(1) \le f(y) \le f(x) \le f(0) < \infty$ for $0 \le x \le y \le 1$.
- A2. $0 < \inf_{0<x<1} |f'(x)| \le \sup_{0<x<1} |f'(x)| < \infty$.
- A3. $\sup_{0<x<1} |f''(x)| < \infty$.

Then, with $\mu \equiv 2 E|V(0)| \int_0^1 |f'(x) f(x)|^{1/3}\, dx$,
$$n^{1/6}\left\{ n^{1/3} \int_0^1 |\widehat f_n(x) - f(x)|\, dx - \mu \right\} \to_d \sigma Z \sim N(0, \sigma^2),$$
where $\sigma^2 \equiv \int_0^\infty \mathrm{Cov}\left( |V(0)|,\ |V(t) - t| \right) dt$.
D: Lower bounds: convex decreasing density

Now consider estimation of a convex decreasing density $f$ on $[0, \infty)$. (Original motivation: Hampel's (1987) bird-migration problem.) Since $f'$ exists almost everywhere, we are now interested in estimation of $\nu_1(f) = f(x_0)$ and $\nu_2(f) = f'(x_0)$.

We let $\mathcal D_2$ denote the class of all convex decreasing densities on $\mathbb{R}^+$. Note that every $f \in \mathcal D_2$ can be written as a scale mixture of the triangular (or Beta(1,2)) density: if $f \in \mathcal D_2$, then
$$f(x) = \int_0^\infty \frac{2}{y}\left( 1 - \frac{x}{y} \right)^{\!+} dG(y)$$
for some (mixing) distribution $G$ on $[0, \infty)$. This corresponds to the fact that every monotone decreasing density $f \in \mathcal D \equiv \mathcal D_1$ can be written as a scale mixture of the Uniform(0,1) (or Beta(1,1)) density: if $f \in \mathcal D_1$, then
$$f(x) = \int_0^\infty y^{-1}\, 1_{[0,y]}(x)\, dG(y)$$
for some distribution $G$ on $[0, \infty)$.
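The uniform scale-mixture representation is easy to check in a concrete case: with $G = \mathrm{Uniform}(0,1)$, $f(x) = \int_x^1 y^{-1}\, dy = -\log x$ on $(0,1]$, which is indeed a decreasing density. A Monte Carlo sketch (the specific $G$ is our illustrative choice):

```python
import numpy as np

# With G = Uniform(0,1): f(x) = E[ 1{x <= Y} / Y ] = -log(x) on (0, 1].
rng = np.random.default_rng(1)
Y = rng.uniform(size=10 ** 6)
xs = [0.1, 0.3, 0.7]
mcs = [float(np.mean((xv <= Y) / Y)) for xv in xs]
for xv, mc in zip(xs, mcs):
    print(xv, mc, -np.log(xv))        # Monte Carlo matches -log(x)

# The triangular scale mixture with G degenerate at y = 1 recovers the
# Beta(1,2) density f(x) = 2 (1 - x) on [0, 1].
x = np.linspace(0.0, 1.0, 5)
print(2 * (1.0 - x))
```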
Scenario 1: Suppose that $f_0 \in \mathcal D_2$ and $x_0 \in (0, \infty)$ satisfy $f_0(x_0) > 0$, $f_0^{(2)}(x_0) > 0$, and $f_0^{(2)}$ is continuous at $x_0$. To establish lower bounds, consider the perturbations $\tilde f_\epsilon$ of $f_0$ given by
$$\tilde f_\epsilon(x) = \begin{cases} f_0(x_0 - \epsilon c_\epsilon) + (x - x_0 + \epsilon c_\epsilon)\, f_0'(x_0 - \epsilon c_\epsilon), & x \in (x_0 - \epsilon c_\epsilon,\ x_0 - \epsilon), \\ f_0(x_0 + \epsilon) + (x - x_0 - \epsilon)\, f_0'(x_0 + \epsilon), & x \in (x_0 - \epsilon,\ x_0 + \epsilon), \\ f_0(x), & \text{elsewhere}; \end{cases}$$
here $c_\epsilon$ is chosen so that $\tilde f_\epsilon$ is continuous at $x_0 - \epsilon$. Now define $f_\epsilon$ by
$$f_\epsilon(x) = \tilde f_\epsilon(x) + \tau_\epsilon\, (x_0 - \epsilon - x)\, 1_{[0,\, x_0 - \epsilon]}(x)$$
with $\tau_\epsilon$ chosen so that $f_\epsilon$ integrates to 1.
Now
$$|\nu_1(f_\epsilon) - \nu_1(f_0)| = \frac{1}{2}\, f_0^{(2)}(x_0)\, \epsilon^2\, (1 + o(1)), \qquad
|\nu_2(f_\epsilon) - \nu_2(f_0)| = f_0^{(2)}(x_0)\, \epsilon\, (1 + o(1)),$$
and some further computation (Jongbloed (1995), (2000)) shows that
$$H^2(f_\epsilon, f_0) = \frac{2\, f_0^{(2)}(x_0)^2}{5\, f_0(x_0)}\, \epsilon^5\, (1 + o(1)).$$
Thus taking $\epsilon \equiv \epsilon_n = c n^{-1/5}$, writing $f_n$ for $f_{\epsilon_n}$, and using our two-point lower bound proposition yields
$$\liminf_{n\to\infty} \inf_{T_n} \max\{ E_n\, n^{2/5}|T_n - \nu_1(f_n)|,\ E_0\, n^{2/5}|T_n - \nu_1(f_0)| \}
\ \ge\ \frac{1}{8}\left( \frac{f_0^2(x_0)\, f_0^{(2)}(x_0)}{4 e^2} \right)^{1/5},$$
and
$$\liminf_{n\to\infty} \inf_{T_n} \max\{ E_n\, n^{1/5}|T_n - \nu_2(f_n)|,\ E_0\, n^{1/5}|T_n - \nu_2(f_0)| \}
\ \ge\ \frac{1}{4}\left( \frac{f_0(x_0)\, f_0^{(2)}(x_0)^3}{4 e} \right)^{1/5}.$$
We will see tomorrow that the MLE achieves these rates and that the limiting distributions involve exactly these constants.

Other scenarios?

- S2: $f_0$ triangular on $[0,1]$? (Degenerate mixing distribution at 1.)
- S3: $x_0 \in (a, b)$ where $f_0$ is linear on $(a, b)$?
- S4: $x_0$ a bend or kink point for $f_0$: $f_0'(x_0-) < f_0'(x_0+)$?
- S5: ?
More informationLagrange duality. The Lagrangian. We consider an optimization program of the form
Lagrange duality Another way to arrive at the KKT conditions, and one which gives us some insight on solving constrained optimization problems, is through the Lagrange dual. The dual is a maximization
More informationBrownian Motion and Stochastic Calculus
ETHZ, Spring 17 D-MATH Prof Dr Martin Larsson Coordinator A Sepúlveda Brownian Motion and Stochastic Calculus Exercise sheet 6 Please hand in your solutions during exercise class or in your assistant s
More informationThe International Journal of Biostatistics
The International Journal of Biostatistics Volume 1, Issue 1 2005 Article 3 Score Statistics for Current Status Data: Comparisons with Likelihood Ratio and Wald Statistics Moulinath Banerjee Jon A. Wellner
More informationImmerse Metric Space Homework
Immerse Metric Space Homework (Exercises -2). In R n, define d(x, y) = x y +... + x n y n. Show that d is a metric that induces the usual topology. Sketch the basis elements when n = 2. Solution: Steps
More informationh(x) lim H(x) = lim Since h is nondecreasing then h(x) 0 for all x, and if h is discontinuous at a point x then H(x) > 0. Denote
Real Variables, Fall 4 Problem set 4 Solution suggestions Exercise. Let f be of bounded variation on [a, b]. Show that for each c (a, b), lim x c f(x) and lim x c f(x) exist. Prove that a monotone function
More informationarxiv:math/ v2 [math.st] 17 Jun 2008
The Annals of Statistics 2008, Vol. 36, No. 3, 1031 1063 DOI: 10.1214/009053607000000974 c Institute of Mathematical Statistics, 2008 arxiv:math/0609020v2 [math.st] 17 Jun 2008 CURRENT STATUS DATA WITH
More informationReview of Multi-Calculus (Study Guide for Spivak s CHAPTER ONE TO THREE)
Review of Multi-Calculus (Study Guide for Spivak s CHPTER ONE TO THREE) This material is for June 9 to 16 (Monday to Monday) Chapter I: Functions on R n Dot product and norm for vectors in R n : Let X
More information6.2 Fubini s Theorem. (µ ν)(c) = f C (x) dµ(x). (6.2) Proof. Note that (X Y, A B, µ ν) must be σ-finite as well, so that.
6.2 Fubini s Theorem Theorem 6.2.1. (Fubini s theorem - first form) Let (, A, µ) and (, B, ν) be complete σ-finite measure spaces. Let C = A B. Then for each µ ν- measurable set C C the section x C is
More informationCHAPTER 3: LARGE SAMPLE THEORY
CHAPTER 3 LARGE SAMPLE THEORY 1 CHAPTER 3: LARGE SAMPLE THEORY CHAPTER 3 LARGE SAMPLE THEORY 2 Introduction CHAPTER 3 LARGE SAMPLE THEORY 3 Why large sample theory studying small sample property is usually
More informationNonparametric estimation under shape constraints, part 1
Nonparametric estimation under shape constraints, part 1 Piet Groeneboom, Delft University August 6, 2013 What to expect? History of the subject Distribution of order restricted monotone estimates. Connection
More informationlim n C1/n n := ρ. [f(y) f(x)], y x =1 [f(x) f(y)] [g(x) g(y)]. (x,y) E A E(f, f),
1 Part I Exercise 1.1. Let C n denote the number of self-avoiding random walks starting at the origin in Z of length n. 1. Show that (Hint: Use C n+m C n C m.) lim n C1/n n = inf n C1/n n := ρ.. Show that
More informationDIFFERENTIAL CRITERIA FOR POSITIVE DEFINITENESS
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume, Number, Pages S -9939(XX)- DIFFERENTIAL CRITERIA FOR POSITIVE DEFINITENESS J. A. PALMER Abstract. We show how the Mellin transform can be used to
More informationAnalysis II - few selective results
Analysis II - few selective results Michael Ruzhansky December 15, 2008 1 Analysis on the real line 1.1 Chapter: Functions continuous on a closed interval 1.1.1 Intermediate Value Theorem (IVT) Theorem
More informationEconomics 204 Fall 2011 Problem Set 2 Suggested Solutions
Economics 24 Fall 211 Problem Set 2 Suggested Solutions 1. Determine whether the following sets are open, closed, both or neither under the topology induced by the usual metric. (Hint: think about limit
More information1 Solution to Problem 2.1
Solution to Problem 2. I incorrectly worked this exercise instead of 2.2, so I decided to include the solution anyway. a) We have X Y /3, which is a - function. It maps the interval, ) where X lives) onto
More informationTopology, Math 581, Fall 2017 last updated: November 24, Topology 1, Math 581, Fall 2017: Notes and homework Krzysztof Chris Ciesielski
Topology, Math 581, Fall 2017 last updated: November 24, 2017 1 Topology 1, Math 581, Fall 2017: Notes and homework Krzysztof Chris Ciesielski Class of August 17: Course and syllabus overview. Topology
More informationProblem set 5, Real Analysis I, Spring, otherwise. (a) Verify that f is integrable. Solution: Compute since f is even, 1 x (log 1/ x ) 2 dx 1
Problem set 5, Real Analysis I, Spring, 25. (5) Consider the function on R defined by f(x) { x (log / x ) 2 if x /2, otherwise. (a) Verify that f is integrable. Solution: Compute since f is even, R f /2
More informationREAL VARIABLES: PROBLEM SET 1. = x limsup E k
REAL VARIABLES: PROBLEM SET 1 BEN ELDER 1. Problem 1.1a First let s prove that limsup E k consists of those points which belong to infinitely many E k. From equation 1.1: limsup E k = E k For limsup E
More informationg 2 (x) (1/3)M 1 = (1/3)(2/3)M.
COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is
More informationNotes on the Lebesgue Integral by Francis J. Narcowich November, 2013
Notes on the Lebesgue Integral by Francis J. Narcowich November, 203 Introduction In the definition of the Riemann integral of a function f(x), the x-axis is partitioned and the integral is defined in
More informationIntegral Jensen inequality
Integral Jensen inequality Let us consider a convex set R d, and a convex function f : (, + ]. For any x,..., x n and λ,..., λ n with n λ i =, we have () f( n λ ix i ) n λ if(x i ). For a R d, let δ a
More information40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology
Singapore University of Design and Technology Lecture 9: Hypothesis testing, uniformly most powerful tests. The Neyman-Pearson framework Let P be the family of distributions of concern. The Neyman-Pearson
More informationOn Efficiency of the Plug-in Principle for Estimating Smooth Integrated Functionals of a Nonincreasing Density
On Efficiency of the Plug-in Principle for Estimating Smooth Integrated Functionals of a Nonincreasing Density arxiv:188.7915v2 [math.st] 12 Feb 219 Rajarshi Mukherjee and Bodhisattva Sen Department of
More informationPart II Probability and Measure
Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)
More informationHomework Assignment #2 for Prob-Stats, Fall 2018 Due date: Monday, October 22, 2018
Homework Assignment #2 for Prob-Stats, Fall 2018 Due date: Monday, October 22, 2018 Topics: consistent estimators; sub-σ-fields and partial observations; Doob s theorem about sub-σ-field measurability;
More information+ 2x sin x. f(b i ) f(a i ) < ɛ. i=1. i=1
Appendix To understand weak derivatives and distributional derivatives in the simplest context of functions of a single variable, we describe without proof some results from real analysis (see [7] and
More informationElements of Probability Theory
Elements of Probability Theory CHUNG-MING KUAN Department of Finance National Taiwan University December 5, 2009 C.-M. Kuan (National Taiwan Univ.) Elements of Probability Theory December 5, 2009 1 / 58
More informationhere, this space is in fact infinite-dimensional, so t σ ess. Exercise Let T B(H) be a self-adjoint operator on an infinitedimensional
15. Perturbations by compact operators In this chapter, we study the stability (or lack thereof) of various spectral properties under small perturbations. Here s the type of situation we have in mind:
More informationFall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.
1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n
More informationEstimation of a k monotone density, part 3: limiting Gaussian versions of the problem; invelopes and envelopes
Estimation of a monotone density, part 3: limiting Gaussian versions of the problem; invelopes and envelopes Fadoua Balabdaoui 1 and Jon A. Wellner University of Washington September, 004 Abstract Let
More informationx y More precisely, this equation means that given any ε > 0, there exists some δ > 0 such that
Chapter 2 Limits and continuity 21 The definition of a it Definition 21 (ε-δ definition) Let f be a function and y R a fixed number Take x to be a point which approaches y without being equal to y If there
More informationDRAFT - Math 101 Lecture Note - Dr. Said Algarni
2 Limits 2.1 The Tangent Problems The word tangent is derived from the Latin word tangens, which means touching. A tangent line to a curve is a line that touches the curve and a secant line is a line that
More informationReal Analysis Notes. Thomas Goller
Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................
More informationSolution Methods. Richard Lusby. Department of Management Engineering Technical University of Denmark
Solution Methods Richard Lusby Department of Management Engineering Technical University of Denmark Lecture Overview (jg Unconstrained Several Variables Quadratic Programming Separable Programming SUMT
More informationEcon 508B: Lecture 5
Econ 508B: Lecture 5 Expectation, MGF and CGF Hongyi Liu Washington University in St. Louis July 31, 2017 Hongyi Liu (Washington University in St. Louis) Math Camp 2017 Stats July 31, 2017 1 / 23 Outline
More informationAnalysis Finite and Infinite Sets The Real Numbers The Cantor Set
Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered
More informationLecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016
Lecture 1: Entropy, convexity, and matrix scaling CSE 599S: Entropy optimality, Winter 2016 Instructor: James R. Lee Last updated: January 24, 2016 1 Entropy Since this course is about entropy maximization,
More informationTHE DVORETZKY KIEFER WOLFOWITZ INEQUALITY WITH SHARP CONSTANT: MASSART S 1990 PROOF SEMINAR, SEPT. 28, R. M. Dudley
THE DVORETZKY KIEFER WOLFOWITZ INEQUALITY WITH SHARP CONSTANT: MASSART S 1990 PROOF SEMINAR, SEPT. 28, 2011 R. M. Dudley 1 A. Dvoretzky, J. Kiefer, and J. Wolfowitz 1956 proved the Dvoretzky Kiefer Wolfowitz
More informationMaximum Likelihood Estimation under Shape Constraints
Maximum Likelihood Estimation under Shape Constraints Hanna K. Jankowski June 2 3, 29 Contents 1 Introduction 2 2 The MLE of a decreasing density 3 2.1 Finding the Estimator..............................
More informationOn continuous time contract theory
Ecole Polytechnique, France Journée de rentrée du CMAP, 3 octobre, 218 Outline 1 2 Semimartingale measures on the canonical space Random horizon 2nd order backward SDEs (Static) Principal-Agent Problem
More information