Estimation and Testing with Current Status Data
Jon A. Wellner, University of Washington

Joint work with Moulinath Banerjee, University of Michigan. Talk at Université Paul Sabatier, Toulouse III, Laboratoire de Statistique et Probabilités.

Outline
- Introduction: current status data
- Estimation of F, unconstrained (old: Ayer et al., 1956)
- Estimation of F, constrained (new)
- The likelihood ratio test of $H : F(t_0) = \theta_0$
- How big is too big? The limit Gaussian problem
- Limiting distribution of the likelihood ratio statistic under $H$
- Confidence intervals for $F(t_0)$
- Further problems

1. Introduction: current status data

$X \sim F$, $Y \sim G$, with $X$ and $Y$ independent.

We observe $(Y, 1\{X \le Y\}) \equiv (Y, \Delta)$ with density
$$p_F(y, \delta) = F(y)^{\delta} (1 - F(y))^{1 - \delta} g(y).$$
Suppose that $(Y_i, \Delta_i)$, $i = 1, \dots, n$, are i.i.d. as $(Y, \Delta)$.

Likelihood:
$$L_n(F) = \prod_{i=1}^{n} F(Y_i)^{\Delta_i} (1 - F(Y_i))^{1 - \Delta_i}.$$
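To make the data structure concrete, here is a small simulation sketch in Python (my own illustration, not part of the talk): the exponential $X$ and uniform $Y$ are arbitrary choices, and `loglik` evaluates $\log L_n(F)$ for a candidate vector of values $F(Y_i)$.

```python
import numpy as np
from scipy.special import xlogy  # xlogy(0, 0) = 0, convenient for Bernoulli log-likelihoods

rng = np.random.default_rng(0)
n = 200
X = rng.exponential(scale=1.0, size=n)   # unobserved event times, X ~ F (here Exp(1))
Y = rng.uniform(0.0, 3.0, size=n)        # observation times, Y ~ G (here Uniform(0, 3))
Delta = (X <= Y).astype(float)           # current status indicator Delta = 1{X <= Y}

def loglik(Fvals, Delta):
    """log L_n(F) = sum_i [ Delta_i log F(Y_i) + (1 - Delta_i) log(1 - F(Y_i)) ]."""
    Fvals = np.asarray(Fvals, dtype=float)
    return np.sum(xlogy(Delta, Fvals) + xlogy(1.0 - Delta, 1.0 - Fvals))

# Example: log-likelihood of the true F, evaluated at the observation times
print(loglik(1.0 - np.exp(-Y), Delta))
```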

Likelihood ratio test of $H : F(t_0) = \theta_0$

The likelihood ratio statistic:
$$\lambda_n = \frac{\sup_F L_n(F)}{\sup_{F : F(t_0) = \theta_0} L_n(F)} = \frac{L_n(\hat F_n)}{L_n(\hat F_n^0)}.$$

2. Estimation of F: Nonparametric MLE

MLE (unconstrained): $\hat F_n(t) = \mathrm{argmax}_F\, L_n(F)$.

Another description: define
$$G_n(t) = \frac{1}{n} \sum_{i=1}^{n} 1_{[Y_i \le t]}, \qquad V_n(t) = \frac{1}{n} \sum_{i=1}^{n} \Delta_i 1_{[Y_i \le t]}.$$
Note that
$$G_n(t) \to_{a.s.} G(t), \qquad V_n(t) \to_{a.s.} \int_0^t F(y)\, dG(y) \equiv V(t).$$
Thus $dV(t) = F(t)\, dG(t)$.

Partial sum diagram: Let $Y_{(1)} \le Y_{(2)} \le \cdots \le Y_{(n)}$ be the ordered observation times. The partial sum diagram $P = \{P_i\}$ is given by
$$P_i = \big( G_n(Y_{(i)}),\, V_n(Y_{(i)}) \big), \qquad i = 1, \dots, n.$$
The nonparametric MLE $\hat F_n$ of $F$ is
$$\hat F_n(Y_{(i)}) = \text{left derivative of the greatest convex minorant (GCM) of } P \text{ at } Y_{(i)}.$$
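Computationally, the left derivative of the GCM of the cumulative sum diagram coincides with the isotonic (pool-adjacent-violators) fit of the $\Delta$'s sorted by $Y$. A minimal sketch in Python (my own; `pava` is a generic weighted pool-adjacent-violators routine, repeated in the later sketches so that each block runs on its own):

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: nondecreasing weighted least-squares fit to y."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:       # pool adjacent violating blocks
            wv = wts[-2] + wts[-1]
            vv = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wv
            cv = cnts[-2] + cnts[-1]
            vals[-2:], wts[-2:], cnts[-2:] = [vv], [wv], [cv]
    return np.repeat(vals, cnts)

def npmle_current_status(Y, Delta):
    """Unconstrained NPMLE: Fhat_n(Y_(i)) = isotonic fit of the Delta's sorted by Y."""
    order = np.argsort(Y)
    return np.asarray(Y, float)[order], pava(np.asarray(Delta, float)[order])

# Toy example with n = 5 (values chosen only for illustration)
Ysort, Fhat = npmle_current_status([0.5, 1.2, 2.0, 2.7, 3.1], [0, 1, 0, 1, 1])
print(Fhat)   # nondecreasing step-function values at the sorted observation times
```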

Cumulative sum diagram, example with $n = 5$: table of $i$, $Y_{(i)}$, $\Delta_{(i)}$, $n V_n(Y_{(i)})$; example continued with the resulting MLE $\hat F_n$. [Figures: the cumulative sum diagram with its greatest convex minorant, and the fitted $\hat F_n$.]

3. The constrained MLE $\hat F_n^0$

Recipe (a computational sketch follows the list):
- Break $P$ into $P^L$ and $P^R$, where $P^L = \{P_i : Y_{(i)} \le t_0\}$ and $P^R = \{P_i : Y_{(i)} > t_0\}$.
- Form the GCMs of $P^L$ and $P^R$, say $\tilde V_n^L$ and $\tilde V_n^R$.
- If the slope of $\tilde V_n^L$ exceeds $\theta_0$, replace it by $\theta_0$; if the slope of $\tilde V_n^R$ drops below $\theta_0$, replace it by $\theta_0$.
- The resulting (truncated or constrained) slope process yields the constrained MLE $\hat F_n^0$.
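A sketch of the recipe in Python (my own translation of the slide's recipe; the `pava` helper from the previous sketch is repeated so the block is self-contained, and the values of $t_0$ and $\theta_0$ below are arbitrary):

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: nondecreasing weighted least-squares fit to y."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            wv = wts[-2] + wts[-1]
            vv = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wv
            cv = cnts[-2] + cnts[-1]
            vals[-2:], wts[-2:], cnts[-2:] = [vv], [wv], [cv]
    return np.repeat(vals, cnts)

def constrained_mle(Y, Delta, t0, theta0):
    """Constrained MLE Fhat0_n with Fhat0_n(t0) = theta0:
    separate isotonic fits on {Y <= t0} and {Y > t0}, slopes then truncated at theta0."""
    order = np.argsort(Y)
    Ys, Ds = np.asarray(Y, float)[order], np.asarray(Delta, float)[order]
    left, right = Ys <= t0, Ys > t0
    F0 = np.empty_like(Ds)
    F0[left] = np.minimum(pava(Ds[left]), theta0)     # left slopes capped at theta0
    F0[right] = np.maximum(pava(Ds[right]), theta0)   # right slopes floored at theta0
    return Ys, F0

Ys, F0 = constrained_mle([0.5, 1.2, 2.0, 2.7, 3.1], [0, 1, 0, 1, 1], t0=1.5, theta0=0.3)
print(F0)
```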

[Figures: cumulative sum diagram with the unconstrained and constrained GCMs, and the corresponding fitted distribution functions.]

4. The likelihood ratio test of $H : F(t_0) = \theta_0$

Likelihood ratio statistic:
$$\lambda_n = \frac{\sup_F L_n(F)}{\sup_{F : F(t_0) = \theta_0} L_n(F)} = \frac{L_n(\hat F_n)}{L_n(\hat F_n^0)}.$$
How big is too big? When $H : F(t_0) = \theta_0$ holds, does $2 \log \lambda_n \to_d$ something?

Answer: Yes! Banerjee and Wellner (2001).
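Given fitted values of the two estimators at the (sorted) observation times, the statistic is just twice the log-likelihood difference. A sketch (assuming `Fhat` and `Fhat0` come from the `npmle_current_status` and `constrained_mle` sketches above; `xlogy` keeps the $0 \cdot \log 0$ terms equal to $0$):

```python
import numpy as np
from scipy.special import xlogy

def loglik(F, Delta):
    """log L_n(F) evaluated at the observation times."""
    return np.sum(xlogy(Delta, F) + xlogy(1.0 - Delta, 1.0 - F))

def two_log_lambda(Fhat, Fhat0, Delta):
    """2 log lambda_n = 2 [ log L_n(Fhat_n) - log L_n(Fhat0_n) ]."""
    return 2.0 * (loglik(Fhat, Delta) - loglik(Fhat0, Delta))

# Usage (with Fhat, Fhat0 and the Delta's all sorted by Y, as in the earlier sketches):
# stat = two_log_lambda(Fhat, Fhat0, Delta_sorted)
```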

5. How big is too big? The limiting Gaussian problem

Suppose that we observe $\{X(t) : t \in \mathbb{R}\}$ where
$$X(t) = F(t) + \sigma W(t), \qquad F(t) = \int_0^t f(s)\, ds,$$
$f$ is monotone nondecreasing, and $W$ is standard (two-sided) Brownian motion. Suppose that we want to estimate the monotone function $f$. Equivalently,
$$dX(t) = f(t)\, dt + \sigma\, dW(t).$$
The canonical monotone function is a linear one, and we can change $\sigma$ to $1$ by scaling arguments, so the canonical version of the problem is as follows:
$$dX(t) = 2t\, dt + dW(t),$$

estimate $2t$ when $\{X(t) : t \in \mathbb{R}\}$ is observed. Thus $X(t) = t^2 + W(t)$.

Unconstrained estimator: the slope of the GCM of $X$. Call this process of slopes of the GCM $S$.
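On a grid, the slope of the GCM of $X$ can be computed as the isotonic fit of the increment ratios, so the canonical problem is easy to simulate. A discretized sketch (window, spacing, and seed are arbitrary choices of mine; `pava` as before):

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: nondecreasing weighted least-squares fit to y."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            wv = wts[-2] + wts[-1]
            vv = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wv
            cv = cnts[-2] + cnts[-1]
            vals[-2:], wts[-2:], cnts[-2:] = [vv], [wv], [cv]
    return np.repeat(vals, cnts)

rng = np.random.default_rng(1)
c, h = 4.0, 0.01                                   # window [-c, c] and grid spacing
t = np.arange(-c, c + h, h)
W = np.cumsum(rng.normal(0.0, np.sqrt(h), size=t.size))
W -= W[np.argmin(np.abs(t))]                       # two-sided Brownian motion pinned at W(0) = 0
X = t**2 + W                                       # X(t) = t^2 + W(t)

S = pava(np.diff(X) / h)                           # unconstrained slope process S on the grid intervals
```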

[Figures: a sample path of $X(t) = W(t) + t^2$ with the unconstrained slope estimator; the slope processes $g_{1,1}$ and $g_{1,1}^0$ plotted against $2t$.]

What is the canonical constrained problem?

Estimate the monotone function $f(t) = 2t$ subject to the constraint that $f(0) = 0$, when $\{X(t) : t \in \mathbb{R}\}$ is observed.

What is the constrained estimator?

Recipe: Break $\{X(t) : t \in \mathbb{R}\}$ into $X^L \equiv \{X(t) : t < 0\}$ and $X^R \equiv \{X(t) : t \ge 0\}$. Form the GCMs of $X^L$ and $X^R$, say $Y^L$ and $Y^R$. If the slope of $Y^L$ exceeds $0$, replace it by $0$; if the slope of $Y^R$ drops below $0$, replace it by $0$. The resulting (truncated or constrained) slope process $S^0$ is the constrained MLE of $f(t) = 2t$ in the Gaussian problem.
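The same discretization gives the constrained slope process: split the grid at $0$, fit each half separately, and truncate the slopes at $0$. A self-contained sketch (it repeats the simulation and `pava` from the previous block, with the same arbitrary grid choices):

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: nondecreasing weighted least-squares fit to y."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            wv = wts[-2] + wts[-1]
            vv = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wv
            cv = cnts[-2] + cnts[-1]
            vals[-2:], wts[-2:], cnts[-2:] = [vv], [wv], [cv]
    return np.repeat(vals, cnts)

rng = np.random.default_rng(2)
c, h = 4.0, 0.01
t = np.arange(-c, c + h, h)
W = np.cumsum(rng.normal(0.0, np.sqrt(h), size=t.size))
W -= W[np.argmin(np.abs(t))]                   # two-sided Brownian motion pinned at 0
slopes = np.diff(t**2 + W) / h                 # increment ratios of X(t) = t^2 + W(t)
mid = 0.5 * (t[:-1] + t[1:])                   # interval midpoints
left, right = mid < 0, mid >= 0

S = pava(slopes)                               # unconstrained slope process
S0 = np.empty_like(S)                          # constrained at f(0) = 0:
S0[left] = np.minimum(pava(slopes[left]), 0.0)     # left slopes capped at 0
S0[right] = np.maximum(pava(slopes[right]), 0.0)   # right slopes floored at 0
```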

[Figures: the unconstrained and one-sided constrained slope processes; $g_{1,1}$ and $g_{1,1}^0$ plotted against $2t$.]

Likelihood ratio test statistic in the Gaussian problem?

Suppose $\{X(t) : t \in [-c, c]\}$ is given by
$$dX(t) = f(t)\, dt + dW(t), \qquad \text{so} \qquad X(t) = F(t) + W(t).$$
Radon–Nikodym derivative (drifted process relative to zero drift):
$$\frac{dP_f}{dP_0} = \exp\left( \int_{-c}^{c} f\, dX - \frac{1}{2} \int_{-c}^{c} f^2(t)\, dt \right).$$
Let
$$\mathcal{F}(c, K) = \{ \text{monotone functions } f : [-c, c] \to \mathbb{R},\ \|f\|_c \le K \}, \qquad \mathcal{F}_0(c, K) = \{ f \in \mathcal{F}(c, K) : f(0) = 0 \}.$$

Then
$$\begin{aligned}
2 \log \lambda_c &= 2 \log \left( \frac{\sup_{f \in \mathcal{F}(c,K)} dP_f / dP_0}{\sup_{f \in \mathcal{F}_0(c,K)} dP_f / dP_0} \right) = 2 \log \left( \frac{dP_{\hat f_c} / dP_0}{dP_{\hat f_{c,0}} / dP_0} \right) \\
&= 2 \left[ \int_{-c}^{c} \hat f_c\, dX - \frac{1}{2} \int_{-c}^{c} \hat f_c^2(t)\, dt - \int_{-c}^{c} \hat f_{c,0}\, dX + \frac{1}{2} \int_{-c}^{c} \hat f_{c,0}^2(t)\, dt \right] \\
&= 2 \int_{-c}^{c} (\hat f_c - \hat f_{c,0})\, dX - \int_{-c}^{c} \{ \hat f_c^2(t) - \hat f_{c,0}^2(t) \}\, dt.
\end{aligned}$$

Taking the limit as $c \to \infty$ with $K = K_c = 5c$, this yields
$$2 \log \lambda = 2 \int_{D} (\hat f - \hat f_0)\, dX - \int_{D} \{ \hat f^2(t) - \hat f_0^2(t) \}\, dt,$$
where $D$ is the (random) set on which $\hat f$ and $\hat f_0$ differ.

From the characterizations of $\hat f$ and $\hat f_0$:
$$\int_{\mathbb{R}} (X - \hat F)\, d\hat f = 0, \qquad \int_{\mathbb{R}} (X - \hat F_0)\, d\hat f_0 = 0.$$

Integration by parts:
$$\begin{aligned}
\int_{D} (\hat f - \hat f_0)\, dX = \int_{\mathbb{R}} (\hat f - \hat f_0)\, dX &= -\int_{D} X\, d(\hat f - \hat f_0) \\
&= -\int_{D} \hat F\, d\hat f + \int_{D} \hat F_0\, d\hat f_0 \\
&= \int_{D} \hat f\, d\hat F - \int_{D} \hat f_0\, d\hat F_0 \\
&= \int_{D} \{ \hat f^2(t) - \hat f_0^2(t) \}\, dt.
\end{aligned}$$

The likelihood ratio statistic becomes
$$2 \log \lambda = \int_{D} \{ \hat f^2(t) - \hat f_0^2(t) \}\, dt.$$

6. Limit distribution of the LR statistic under $H$

Limit distributions for $\hat F_n$ and $\hat F_n^0$. Set
$$G_n^{loc}(t, h) = n^{1/3} \left( G_n(t + n^{-1/3} h) - G_n(t) \right),$$
$$V_n^{loc}(t, h) = n^{1/3} \left\{ n^{1/3} \left( V_n(t + n^{-1/3} h) - V_n(t) \right) - G_n^{loc}(t, h)\, F(t) \right\}.$$

Theorem 1. If $g(t_0) = G'(t_0)$ and $f(t_0) = F'(t_0)$ exist, then:
A. $G_n^{loc}(t_0, h) \to_p g(t_0)\, h$.
B. $V_n^{loc}(t_0, h) \Rightarrow a W(h) + b h^2$, where
$$a = \sqrt{F(t_0)(1 - F(t_0))\, g(t_0)}, \qquad b = \tfrac{1}{2} f(t_0) g(t_0),$$
and $W$ is a two-sided Brownian motion starting from $0$.

Now define
$$Z_n(h) = n^{1/3} \left( \hat F_n(t_0 + h n^{-1/3}) - F(t_0) \right), \qquad Z_n^0(h) = n^{1/3} \left( \hat F_n^0(t_0 + h n^{-1/3}) - F(t_0) \right).$$

Theorem 2. If the hypotheses of Theorem 1 hold with $f(t_0) > 0$, $g(t_0) > 0$, and $F(t_0) = \theta_0$, then
$$\left( Z_n(h),\, Z_n^0(h) \right) \Rightarrow \left( S_{a,b}(h),\, S_{a,b}^0(h) \right) / g(t_0),$$
where $S_{a,b}$ and $S_{a,b}^0$ are the unconstrained and constrained slope processes corresponding to $X_{a,b}(h) = a W(h) + b h^2$.

Limit distribution for $2 \log \lambda_n$

Theorem 3 (Banerjee and Wellner, 2001). Suppose that $F$ and $G$ have densities $f$ and $g$ which are strictly positive and continuous in a neighborhood of $t_0$, and suppose that $F(t_0) = \theta_0$. Then
$$2 \log \lambda_n \to_d \frac{1}{F(t_0)(1 - F(t_0))\, g(t_0)} \int \left( (S_{a,b}(z))^2 - (S_{a,b}^0(z))^2 \right) dz \overset{d}{=} \int \left\{ (S(z))^2 - (S^0(z))^2 \right\} dz \equiv D,$$
and the distribution of $D$ is universal (free of parameters).
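Since $D$ is universal, its distribution (and the critical values $d_\alpha$ used later for confidence intervals) can be tabulated by simulating the canonical Gaussian problem on a finite window. A Monte Carlo sketch of mine (window, grid, and number of replications are arbitrary, and the finite window only approximates the limit):

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: nondecreasing weighted least-squares fit to y."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            wv = wts[-2] + wts[-1]
            vv = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wv
            cv = cnts[-2] + cnts[-1]
            vals[-2:], wts[-2:], cnts[-2:] = [vv], [wv], [cv]
    return np.repeat(vals, cnts)

def draw_D(rng, c=4.0, h=0.02):
    """One approximate draw of D = integral of S^2 - (S0)^2, computed on the window [-c, c]."""
    t = np.arange(-c, c + h, h)
    W = np.cumsum(rng.normal(0.0, np.sqrt(h), size=t.size))
    W -= W[np.argmin(np.abs(t))]
    slopes = np.diff(t**2 + W) / h
    mid = 0.5 * (t[:-1] + t[1:])
    left, right = mid < 0, mid >= 0
    S = pava(slopes)
    S0 = np.empty_like(S)
    S0[left] = np.minimum(pava(slopes[left]), 0.0)
    S0[right] = np.maximum(pava(slopes[right]), 0.0)
    return float(np.sum(S**2 - S0**2) * h)

rng = np.random.default_rng(3)
draws = np.array([draw_D(rng) for _ in range(2000)])
d_alpha = np.quantile(draws, 0.95)    # Monte Carlo estimate of the 0.95 quantile of D
```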

[Figure: comparison of distribution functions $F(x)$, Brownian versus exponential case.]

7. Confidence intervals for $F(t_0)$

Wald-type intervals:
$$Z_n(0) = n^{1/3} \left( \hat F_n(t_0) - F(t_0) \right) \to_d S_{a,b}(0) / g(t_0) \overset{d}{=} \left\{ \frac{F(t_0)(1 - F(t_0)) f(t_0)}{2 g(t_0)} \right\}^{1/3} S(0) \equiv C(F, f, g)\, S(0),$$
where $S(0) \overset{d}{=} 2Z \equiv 2\, \mathrm{argmin}_h \{ W(h) + h^2 \}$ and $S(0) \equiv S_{1,1}(0)$.

Wald interval:
$$\hat F_n(t_0) \pm n^{-1/3}\, C(\hat F_n, \hat f_n, \hat g_n)\, t_{\alpha/2},$$
where $\hat f_n$ and $\hat g_n$ are estimates of $f$ and $g$ (at $t_0$), and $t_{\alpha/2}$ satisfies $P(2Z > t_{\alpha/2}) = \alpha/2$.
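In code the Wald interval itself is trivial once plug-in estimates of $f(t_0)$ and $g(t_0)$ are available (which is exactly the smoothing issue raised next); the quantile of $2Z$ comes from Chernoff's distribution, by table or simulation. All values passed to the sketch below are placeholders of mine:

```python
def wald_interval(Fhat_t0, n, fhat, ghat, t_alpha2):
    """Fhat_n(t0) +/- n^{-1/3} * C(Fhat, fhat, ghat) * t_{alpha/2},
    with C = [ Fhat (1 - Fhat) fhat / (2 ghat) ]^{1/3}."""
    C = (Fhat_t0 * (1.0 - Fhat_t0) * fhat / (2.0 * ghat)) ** (1.0 / 3.0)
    half = n ** (-1.0 / 3.0) * C * t_alpha2
    return Fhat_t0 - half, Fhat_t0 + half

# Illustrative call only: fhat and ghat would be kernel-type estimates at t0,
# and t_alpha2 a quantile of 2 * argmin(W(h) + h^2) (Chernoff's distribution).
print(wald_interval(Fhat_t0=0.4, n=200, fhat=0.6, ghat=0.33, t_alpha2=2.0))
```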

Problem: this involves smoothing to get the estimators $\hat f_n$ and $\hat g_n$!

Confidence intervals from the LR test

Invert the test:
$$\{ \theta : 2 \log \lambda_n(\theta) \le d_\alpha \}, \qquad \text{where } P(D \le d_\alpha) = 1 - \alpha.$$
Advantage: no smoothing needed!

Tradeoff: need to compute the constrained estimator $\hat F_n^0$ of $F$ and $\lambda_n(\theta)$ for many different values of the constraint $\theta$ (a sketch of the inversion follows).
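A sketch of the inversion (my own; the `pava` and constrained-fit steps repeat the earlier sketches so the block runs on its own, and `d_alpha` is the critical value of $D$, e.g. the Monte Carlo quantile computed above):

```python
import numpy as np
from scipy.special import xlogy

def pava(y, w=None):
    """Pool-adjacent-violators: nondecreasing weighted least-squares fit to y."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, cnts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            wv = wts[-2] + wts[-1]
            vv = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wv
            cv = cnts[-2] + cnts[-1]
            vals[-2:], wts[-2:], cnts[-2:] = [vv], [wv], [cv]
    return np.repeat(vals, cnts)

def loglik(F, Delta):
    return np.sum(xlogy(Delta, F) + xlogy(1.0 - Delta, 1.0 - F))

def lr_confidence_interval(Y, Delta, t0, d_alpha, grid=None):
    """{theta : 2 log lambda_n(theta) <= d_alpha}: invert the LR test over a grid of theta."""
    order = np.argsort(Y)
    Ys, Ds = np.asarray(Y, float)[order], np.asarray(Delta, float)[order]
    Fhat = pava(Ds)                              # unconstrained NPMLE
    left, right = Ys <= t0, Ys > t0
    grid = np.linspace(0.01, 0.99, 99) if grid is None else grid
    keep = []
    for theta in grid:
        F0 = np.empty_like(Ds)                   # constrained fit at F(t0) = theta
        F0[left] = np.minimum(pava(Ds[left]), theta)
        F0[right] = np.maximum(pava(Ds[right]), theta)
        if 2.0 * (loglik(Fhat, Ds) - loglik(F0, Ds)) <= d_alpha:
            keep.append(theta)
    return (min(keep), max(keep)) if keep else None
```

Note that no bandwidths appear anywhere in this construction, which is the point of the slide.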

[Figures: $\hat F_n(t)$ and $F(t)$; Wald-type versus LRT-based confidence intervals; LRT-based intervals for Keiding's data.]

8. Further problems

- Can we find the distribution of $D$ analytically?
- Does the same limit $D$ arise as the limit distribution of the likelihood ratio statistic for a large class of such problems involving monotone functions? (Yes: Banerjee (2005).)
- Power? What is the appropriate contiguity theory? What is the limit distribution of the likelihood ratio statistic under local alternatives? See Banerjee and Wellner (2001, 2005b).
- What happens if we constrain at $k > 1$ points? (Independence!)
- Confidence bands for the whole monotone function $F$?
- Confidence intervals (and bands?) for estimating a concave distribution function $F$?
