Asymptotic statistics using the Functional Delta Method


1 Quantiles, Order Statistics and L-Statistics. TU Kaiserslautern, February 15, 2015

4 Motivation: The delta method introduced in Chapter 3 is a useful technique for turning the weak convergence of random vectors r_n(T_n − θ) into weak convergence of the transformations r_n(φ(T_n) − φ(θ)). Goal: extend the technique to arbitrary normed spaces. Example: apply the method to the empirical process √n(F_n − F), which converges in the space D[−∞, ∞] to an F-Brownian bridge. We then hope to derive the asymptotic distribution of transformations √n(φ(F_n) − φ(F)).

5 Table of Contents: 1. The Functional Delta Method (Heuristic Ansatz; Types of Differentiability) 2. Quantiles and Order Statistics (Asymptotic Normality; Order Statistics; Extreme Values) 3. L-Statistics (Definition and Examples)

10 Heuristic Ansatz: For a statistic of the form φ(F_n) we want to find the asymptotic distribution via something like a Taylor expansion: φ(F_n) − φ(F) = φ'_F(F_n − F) + R_n, where the remainder is hopefully small. If φ takes values in R, define φ'_F(G) as the ordinary derivative of the function t ↦ φ(F + tG) for a fixed perturbation G. Then, by Taylor's theorem, φ(F + tG) − φ(F) = tφ'_F(G) + O(t²). Substituting t = 1/√n and G = G_n, for the empirical process G_n = √n(F_n − F), yields the von Mises expansion φ(F_n) − φ(F) = (1/√n)φ'_F(G_n) + ⋯ + (1/(m! n^{m/2}))φ_F^{(m)}(G_n) + ⋯. But: because G_n depends on n, this is not a legal choice for G, and there is no guarantee that the remainder is small!

12 von Mises expansion for m = 1: However, it is reasonable to make the Ansatz φ(F_n) − φ(F) ≈ (1/√n)φ'_F(G_n) = (1/n) Σ_{i=1}^n φ'_F(δ_{X_i} − F). If the random variables φ'_F(δ_{X_i} − F) have zero mean and finite variance, we may expect √n(φ(F_n) − φ(F)) to be asymptotically normal. The function x ↦ φ'_F(δ_x − F) is the so-called influence function of φ: φ'_F(δ_x − F) = d/dt|_{t=0} φ((1 − t)F + tδ_x). In robust statistics we look for estimators with a bounded influence function.

13 Example (Mean): Consider the sample mean X̄_n and the mean functional φ(F) = ∫ x dF. Then φ(F_n) = ∫ x dF_n = X̄_n. Hence, φ'_F(δ_x − F) = d/dt|_{t=0} ∫ s d[(1 − t)F + tδ_x](s) = x − ∫ s dF(s). Note that the influence function is unbounded and the sample mean is not robust.
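The mean's influence function is easy to check numerically. The sketch below (plain Python; the mean `mu` and contamination point `x` are arbitrary illustrative values, not from the slides) differentiates t ↦ φ((1 − t)F + tδ_x) by a finite difference and recovers x − ∫ s dF(s):

```python
def phi_mixture(mu, x, t):
    # phi((1 - t)F + t*delta_x) for the mean functional:
    # integrating s against the mixture gives (1 - t)*mean(F) + t*x
    return (1.0 - t) * mu + t * x

mu = 0.3   # mean of a hypothetical distribution F
x = 2.0    # contamination point
t = 1e-6
influence = (phi_mixture(mu, x, t) - phi_mixture(mu, x, 0.0)) / t
print(influence)   # approximately x - mu = 1.7
```

Because t ↦ φ(F_t) is affine for the mean, the finite difference is exact up to floating-point error, which makes the unboundedness in x plain: the influence grows linearly with the contamination point.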

15 Example (Quantiles): For fixed p ∈ (0, 1) we want to estimate the p-quantile of F by its empirical counterpart: ζ̂_p = F_n^{-1}(p) = φ(F_n) with φ(F) = F^{-1}(p). To derive the influence function, look at the identity p = F_t(F_t^{-1}(p)) = (1 − t)F(F_t^{-1}(p)) + tδ_x(F_t^{-1}(p)), where F_t = (1 − t)F + tδ_x and δ_x also denotes the distribution function 1_{[x,∞)}. Differentiating both sides w.r.t. t at t = 0 yields 0 = f(F^{-1}(p)) · d/dt|_{t=0} F_t^{-1}(p) − F(F^{-1}(p)) + δ_x(F^{-1}(p)).

16 Because d/dt|_{t=0} F_t^{-1}(p) = φ'_F(δ_x − F), we can solve this equation and obtain the influence function φ'_F(δ_x − F) = (p − 1_{[x,∞)}(F^{-1}(p))) / f(F^{-1}(p)). With the von Mises expansion we get the approximation √n(F_n^{-1}(p) − F^{-1}(p)) ≈ (1/√n) Σ_{i=1}^n (p − 1_{[X_i,∞)}(F^{-1}(p))) / f(F^{-1}(p)), which is asymptotically normal by the CLT.
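A quick Monte Carlo check of this normal approximation (an illustrative sketch, assuming an Exp(1) population, for which F^{-1}(p) = −log(1 − p) and f(F^{-1}(p)) = 1 − p; sample size and repetition count are arbitrary):

```python
import math
import random
import statistics

random.seed(0)
p, n, reps = 0.5, 400, 2000
true_q = -math.log(1.0 - p)                # F^{-1}(p) for Exp(1)
asym_var = p * (1.0 - p) / (1.0 - p) ** 2  # p(1-p)/f(F^{-1}(p))^2

vals = []
for _ in range(reps):
    xs = sorted(random.expovariate(1.0) for _ in range(n))
    emp_q = xs[math.ceil(n * p) - 1]       # F_n^{-1}(p) = X_(ceil(np))
    vals.append(math.sqrt(n) * (emp_q - true_q))

print(statistics.mean(vals), statistics.variance(vals), asym_var)
```

The empirical variance of √n(F_n^{-1}(p) − F^{-1}(p)) should sit close to the predicted asymptotic variance (here p/(1 − p) = 1 for the median of Exp(1)).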

18 Definition (Gateaux differentiability): A map φ : D → E is called Gateaux differentiable at θ ∈ D if there exists a continuous, linear map φ'_θ : D → E such that for every fixed h ∈ D, ‖(φ(θ + th) − φ(θ))/t − φ'_θ(h)‖_E → 0 as t ↓ 0. Definition (Hadamard differentiability): A map φ : D → E is called Hadamard differentiable at θ if there exists a continuous, linear map φ'_θ : D → E such that ‖(φ(θ + t h_t) − φ(θ))/t − φ'_θ(h)‖_E → 0 as t ↓ 0, for every converging sequence h_t → h. If this holds not for all h ∈ D but only for h in a subset D_0 ⊆ D, then φ is called Hadamard differentiable tangentially to D_0.

20 Definition (Fréchet differentiability): A map φ : D → E is called Fréchet differentiable at θ ∈ D if there exists a continuous, linear map φ'_θ : D → E such that ‖φ(θ + h) − φ(θ) − φ'_θ(h)‖_E = o(‖h‖) as h → 0. In statistical applications Fréchet differentiability may fail where Hadamard differentiability still holds. Hadamard and Fréchet differentiability are equivalent when D = R^k.

21 Theorem (Functional delta method): Let D and E be normed linear spaces. Let φ : D_φ ⊆ D → E be Hadamard differentiable at θ tangentially to D_0. Let T_n : Ω → D_φ be maps such that r_n(T_n − θ) ⇝ T for some sequence of numbers r_n → ∞ and a random element T with values in D_0. Then r_n(φ(T_n) − φ(θ)) ⇝ φ'_θ(T). If φ'_θ can be extended to a continuous map on the whole space D, then we also have r_n(φ(T_n) − φ(θ)) = φ'_θ(r_n(T_n − θ)) + o_P(1).

22 Theorem (Chain rule): Let φ : D_φ → E_ψ and ψ : E_ψ → F be maps defined on subsets D_φ and E_ψ of normed spaces D and E. Let φ be Hadamard differentiable at θ tangentially to D_0 and let ψ be Hadamard differentiable at φ(θ) tangentially to φ'_θ(D_0). Then ψ ∘ φ : D_φ → F is Hadamard differentiable at θ tangentially to D_0 with derivative ψ'_{φ(θ)} ∘ φ'_θ.

23 Nelson-Aalen estimator: (1) Goal: estimate the distribution function F of a random sample of failure times T_1, ..., T_n. (2) Problem: the observed data (X_i, Δ_i) are right-censored: X_i = T_i ∧ C_i with censoring time C_i, and Δ_i = 1_{T_i ≤ C_i} records whether T_i is censored or not. (3) The cumulative hazard function can be written as Λ(t) = ∫_[0,t] 1/(1 − F_−) dF = ∫_[0,t] 1/(1 − H_−) dH_1, with H the distribution function of X_i and H_1(x) = P(X_i ≤ x, Δ_i = 1). (4) Estimating H and H_1 by their sample counterparts H_n(x) = (1/n) Σ_{i=1}^n 1_{X_i ≤ x} and H_1n(x) = (1/n) Σ_{i=1}^n 1_{X_i ≤ x, Δ_i = 1} yields the Nelson-Aalen estimator Λ̂_n(t) = ∫_[0,t] 1/(1 − H_{n−}) dH_1n.
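The plug-in formula turns into a short sum over observed failure times: each observed event at x contributes 1/(number at risk just before x). A minimal sketch (the toy data are made up for illustration):

```python
def nelson_aalen(data, t):
    """Nelson-Aalen estimate of the cumulative hazard at time t.

    data: list of (x, delta) pairs, delta = 1 if the failure was observed.
    """
    total = 0.0
    for x, delta in sorted(data):
        if x > t:
            break
        if delta == 1:
            # number at risk just before x, i.e. n * (1 - H_n(x-))
            at_risk = sum(1 for y, _ in data if y >= x)
            total += 1.0 / at_risk
    return total

data = [(2.0, 1), (3.0, 0), (4.0, 1), (5.0, 1), (7.0, 0)]
print(nelson_aalen(data, 6.0))   # 1/5 + 1/3 + 1/2
```

Censored observations (delta = 0) add nothing to the sum but still shrink the risk set, which is exactly how the estimator accounts for censoring.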

24 Table of Contents: 1. The Functional Delta Method 2. Quantiles and Order Statistics (Asymptotic Normality; Order Statistics; Extreme Values) 3. L-Statistics

27 The quantile function of a cumulative distribution function F is defined by F^{-1} : (0, 1) → R, F^{-1}(p) = inf{x : F(x) ≥ p}. The empirical quantile function is related to the order statistics through F_n^{-1}(p) = X_(⌈np⌉). Goal: show the asymptotic normality of the sequence √n(F_n^{-1}(p) − F^{-1}(p)).
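The relation F_n^{-1}(p) = X_(⌈np⌉) can be verified directly from the definition of the generalized inverse. A small sketch with made-up data:

```python
import math

# For a sorted toy sample, the generalized inverse
# F_n^{-1}(p) = inf{x : F_n(x) >= p} coincides with X_(ceil(n*p)).
xs = sorted([2.1, 0.4, 3.7, 1.5, 2.9])
n = len(xs)

def emp_quantile(p):
    # scan the order statistics for the first x with F_n(x) >= p
    for i, x in enumerate(xs, start=1):
        if i / n >= p:
            return x
    return xs[-1]

for p in (0.1, 0.2, 0.5, 0.8, 1.0):
    assert emp_quantile(p) == xs[math.ceil(n * p) - 1]
print(emp_quantile(0.5))   # the empirical median, 2.1
```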

30 To derive the asymptotic distribution, we have to investigate the Hadamard differentiability of the map φ : F ↦ F^{-1}(p). The delta method is not restricted to empirical quantiles but can also be applied to other estimators of F. For a nondecreasing F ∈ D[a, b] with [a, b] ⊆ [−∞, ∞], let φ(F) ∈ [a, b] be an arbitrary point such that F(φ(F)−) ≤ p ≤ F(φ(F)). The domain D_φ of the resulting map φ is the set of all F for which a solution to these inequalities exists.

32 Lemma: Let F ∈ D_φ be differentiable at a point ζ_p ∈ (a, b) such that F(ζ_p) = p, with positive derivative. Then φ : D_φ ⊆ D[a, b] → R is Hadamard differentiable at F tangentially to the set of functions h ∈ D[a, b] that are continuous at ζ_p, with derivative φ'_F(h) = −h(ζ_p)/F'(ζ_p). Corollary: Let 0 < p < 1. If F is differentiable at F^{-1}(p) with positive derivative f(F^{-1}(p)), then √n(F_n^{-1}(p) − F^{-1}(p)) = −(1/√n) Σ_{i=1}^n (1_{X_i ≤ F^{-1}(p)} − p)/f(F^{-1}(p)) + o_P(1). Hence, the asymptotic distribution of the sequence is N(0, σ²) with variance σ² = p(1 − p)/f²(F^{-1}(p)).

33 Example: We estimate the 0.1-quantile of a Weibull(200, 2) distribution. [Figure: histograms and normal Q-Q plots of the estimates for N = 100 and N = 500.]

35 Consider not only a single quantile but the whole function (F_n^{-1}(p))_{p_1 < p < p_2} for fixed numbers 0 ≤ p_1 < p_2 ≤ 1. Because any quantile function is bounded on [p_1, p_2] ⊂ (0, 1), we may hope to strengthen the result to convergence in ℓ^∞(p_1, p_2). If the distribution has compact support, we are able to show weak convergence in ℓ^∞(0, 1).

36 Lemma: Given an interval [a, b] ⊆ R, let D_1 be the set of all restrictions of distribution functions on R to [a, b], and let D_2 be the subset of D_1 of distribution functions of measures that give mass 1 to (a, b]. (i) Let 0 < p_1 < p_2 < 1 and let F be continuously differentiable on the interval [a, b] = [F^{-1}(p_1) − ε, F^{-1}(p_2) + ε] for some ε > 0, with strictly positive derivative f. Then the inverse map G ↦ G^{-1}, as a map D_1 ⊆ D[a, b] → ℓ^∞[p_1, p_2], is Hadamard differentiable at F tangentially to C[a, b]. (ii) Let F have compact support [a, b] and be continuously differentiable on its support with strictly positive derivative f. Then G ↦ G^{-1}, as a map D_2 ⊆ D[a, b] → ℓ^∞(0, 1), is Hadamard differentiable at F tangentially to C[a, b]. In both cases the derivative is the map h ↦ −(h/f) ∘ F^{-1}.

37 Corollary: (i) Let 0 < p_1 < p_2 < 1 and let F be differentiable on the interval [a, b] = [F^{-1}(p_1) − ε, F^{-1}(p_2) + ε] for some ε > 0 with positive derivative f. Then the sequence √n(F_n^{-1} − F^{-1}) converges weakly in ℓ^∞(p_1, p_2). (ii) If F has compact support and is continuously differentiable on its support with strictly positive derivative f, the result can be strengthened to weak convergence in ℓ^∞(0, 1). In both cases the limit process is p ↦ −G_λ(p)/f(F^{-1}(p)), where G_λ is a standard Brownian bridge.

40 Order Statistics: In estimating a quantile, we could also use the order statistics directly. For X_(k_n) to be a consistent estimator of F^{-1}(p) one needs at least k_n/n → p. To ensure that F_n^{-1}(p) and X_(k_n) are asymptotically equivalent we also need k_n/n → p faster than 1/√n. Lemma: Let F be differentiable at F^{-1}(p) with positive derivative and let k_n/n = p + c/√n + o(1/√n). Then √n(X_(k_n) − F_n^{-1}(p)) →_P c/f(F^{-1}(p)).

45 Example: Goal: estimate a confidence interval using the order statistics, i.e. determine k_N, l_N ∈ N such that P(X_(k) < F^{-1}(p) < X_(l)) → 1 − α. Ansatz: P(X_(k) < F^{-1}(p) < X_(l)) = P(U_(k) < p < U_(l)) for uniform order statistics U_(k). Set k/N = p − z_{1−α/2} √(p(1 − p)/N) and l/N = p + z_{1−α/2} √(p(1 − p)/N). Using the preceding lemma we obtain U_(k) = G_N^{-1}(p) ∓ z_{1−α/2} √(p(1 − p)/N) + o_P(1/√N). Hence, {U_(k) < p < U_(l)} is asymptotically equivalent to {√N |G_N^{-1}(p) − p| ≤ z_{1−α/2} √(p(1 − p))}. The probability of this event converges to 1 − α.
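The recipe can be sketched in a few lines (an illustrative example assuming an Exp(1) sample and α = 0.05, so z_{1−α/2} ≈ 1.96; sample size is arbitrary):

```python
import math
import random

random.seed(1)
N, p, z = 1000, 0.1, 1.96
xs = sorted(random.expovariate(1.0) for _ in range(N))

# k/N, l/N = p -/+ z * sqrt(p(1-p)/N), rounded outwards
half = z * math.sqrt(N * p * (1.0 - p))
k = int(math.floor(N * p - half))
l = int(math.ceil(N * p + half))
lower, upper = xs[k - 1], xs[l - 1]        # the interval (X_(k), X_(l))

true_q = -math.log(1.0 - p)                # F^{-1}(0.1) for Exp(1)
print(k, l, lower, upper, true_q)
```

Note the interval is distribution-free: k and l depend only on N, p and α, never on F.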

46 Example (Weibull distribution): Estimate a confidence interval for the 0.1-quantile of a Weibull(200, 2) distribution. Simulation of 1000 intervals (theoretical quantile: 32.46):

N     | empirical interval | coverage probability
50    | [12.65, 45.98]     | 97.2%
100   | [21.86, 40.99]     | 93.5%
1000  | [29.12, 35.45]     | 94.9%
      | [31.46, 33.48]     | 95.3%

49 Extreme Values: Study the asymptotic behaviour of extreme order statistics (for example X_(k_n) with k_n/n → 0 or 1). For x ∈ R it holds that P(X_(k_n) ≤ x) = P(Y ≤ n − k_n), where Y ~ Binomial(n, p_n) with p_n = P(X_i > x). Hence, the limit distributions of general order statistics can be derived from approximations to the binomial distribution.

52 Consider the extreme cases k_n = n − k for fixed k, and start with the maximum X_(n). The distribution of the maximum is P(X_(n) ≤ x) = F(x)^n = (1 − nS(x)/n)^n, where S is the survival function S(x) = P(X_i > x). Lemma: For any sequence x_n and any τ > 0, we have P(X_(n) ≤ x_n) → e^{−τ} if and only if nS(x_n) → τ.
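The lemma can be checked deterministically for the Exp(1) distribution, where S(x) = e^{−x}: taking x_n = log(n/τ) makes nS(x_n) = τ exactly, so F(x_n)^n should approach e^{−τ}. An illustrative sketch:

```python
import math

tau = 2.0
for n in (10, 100, 10_000):
    x_n = math.log(n / tau)               # chosen so that n * S(x_n) = tau
    prob = (1.0 - math.exp(-x_n)) ** n    # F(x_n)^n with F(x) = 1 - e^{-x}
    print(n, prob, math.exp(-tau))
```

Already at n = 10,000 the two values agree to several decimals, which is the familiar (1 − τ/n)^n → e^{−τ} limit in disguise.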

54 Goal: Find constants a_n and b_n > 0 such that b_n^{-1}(X_(n) − a_n) converges to a nontrivial limit. In view of the lemma we must choose a_n and b_n such that S(a_n + b_n x) = O(1/n). The set of possible limit distributions is extremely small: Theorem (Extreme value distributions): Let b_N^{-1}(X_(N) − a_N) ⇝ G for a nondegenerate distribution G. Then G belongs to the location-scale family of a distribution of one of the following forms: (i) e^{−e^{−x}} with support R; (ii) e^{−x^{−α}} with support [0, ∞) and α > 0; (iii) e^{−(−x)^α} with support (−∞, 0] and α > 0.

55 Example (Normal distribution): Set a_n = √(2 log n) − (log log n + log 4π)/(2√(2 log n)) and b_n = 1/√(2 log n). Use Mills' ratio, which asserts that ψ(t) ~ φ(t)/t as t → ∞ for the standard normal survival function ψ = 1 − Φ and density φ, to see that nψ(a_n + b_n x) → e^{−x} for every x. Hence √(2 log n)(X_(n) − a_n) converges to a limit of type (i).
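This can be illustrated numerically via the complementary error function (a sketch; n = 10^7 is an arbitrary choice, and only rough agreement is expected because the convergence is logarithmically slow):

```python
import math

def normal_sf(t):
    # standard normal survival function 1 - Phi(t)
    return 0.5 * math.erfc(t / math.sqrt(2.0))

n = 10**7
r = math.sqrt(2.0 * math.log(n))
a_n = r - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * r)
b_n = 1.0 / r

vals = [(x, n * normal_sf(a_n + b_n * x)) for x in (0.0, 1.0, 2.0)]
for x, v in vals:
    print(x, v, math.exp(-x))   # n*S(a_n + b_n*x) vs e^{-x}
```

Even at n = 10^7 the discrepancy at x = 0 is a few percent, a well-known feature of normal maxima: the Gumbel approximation improves only at a log rate.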

56 Theorem: Let τ_F = sup{t : F(t) < 1}. Then there exist constants a_N and b_N such that the sequence b_N^{-1}(X_(N) − a_N) converges in distribution if and only if, as t ↑ τ_F, one of the following holds: (i) there exists a strictly positive function g on R such that S(t + g(t)x)/S(t) → e^{−x} for every x ∈ R; (ii) τ_F = ∞ and S(tx)/S(t) → x^{−α} for every x > 0; (iii) τ_F < ∞ and S(τ_F − (τ_F − t)x)/S(t) → x^α for every x > 0. The constants (a_N, b_N) can be taken equal to (u_N, g(u_N)), (0, u_N) and (τ_F, τ_F − u_N), respectively, for u_N = F^{-1}(1 − 1/N).

57 The convergence of the maximum X_(n) implies the weak convergence of X_(n−k). Theorem: Let b_N^{-1}(X_(N) − a_N) ⇝ G. Then b_N^{-1}(X_(N−k) − a_N) ⇝ H for the distribution function H(x) = G(x) Σ_{i=0}^k (−log G(x))^i / i!. The limit distribution follows from the Poisson approximation to the binomial distribution.
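The binomial-to-Poisson step can be checked numerically: with p_n = τ/n held fixed, P(Bin(n, p_n) ≤ k) should approach e^{−τ} Σ_{i=0}^k τ^i/i!, which is H evaluated at a point where −log G = τ. An illustrative sketch (τ, k, n are arbitrary):

```python
import math

def binom_cdf(n, p, k):
    # P(Bin(n, p) <= k) by direct summation (fine for small k)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

tau, k, n = 2.0, 3, 100_000
exact = binom_cdf(n, tau / n, k)
G = math.exp(-tau)
limit = G * sum(tau**i / math.factorial(i) for i in range(k + 1))
print(exact, limit)
```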

58 Table of Contents: 1. The Functional Delta Method 2. Quantiles and Order Statistics 3. L-Statistics (Definition and Examples)

59 Definition (L-statistics): Let X_(1), ..., X_(n) be the order statistics of a sample of real-valued random variables. A linear combination of (transformed) order statistics Σ_{i=1}^n c_{ni} a(X_(i)) is called an L-statistic with coefficients c_{ni} and score function a.

61 Example (Trimmed and Winsorized means): (i) The α-trimmed mean is the average of the middle (1 − 2α)-th fraction of the observations: X̄_n^{T,α} = (1/(n − 2⌊αn⌋)) Σ_{i=⌊αn⌋+1}^{n−⌊αn⌋} X_(i). (ii) The α-Winsorized mean replaces the α-th fraction of smallest and largest observations by X_(⌊αn⌋) and X_(n−⌊αn⌋+1), respectively, and then takes the average: X̄_n^{W,α} = (1/n)[⌊αn⌋ X_(⌊αn⌋) + Σ_{i=⌊αn⌋+1}^{n−⌊αn⌋} X_(i) + ⌊αn⌋ X_(n−⌊αn⌋+1)].
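Both estimators are easy to implement directly from the order statistics. A sketch using the conventions above (the data set is made up, with one gross outlier):

```python
import math

def trimmed_mean(xs, alpha):
    # drop the lowest and highest floor(alpha*n) observations, then average
    xs, n = sorted(xs), len(xs)
    k = math.floor(alpha * n)
    return sum(xs[k:n - k]) / (n - 2 * k)

def winsorized_mean(xs, alpha):
    # replace the extremes by X_(k) resp. X_(n-k+1) (k copies each)
    xs, n = sorted(xs), len(xs)
    k = math.floor(alpha * n)
    return (k * xs[k - 1] + sum(xs[k:n - k]) + k * xs[n - k]) / n

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]   # one gross outlier
print(trimmed_mean(data, 0.2), winsorized_mean(data, 0.2))
```

With α = 0.2 both estimates ignore the outlier and land at 5.5, while the ordinary mean is dragged to 14.5.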

62 [Figure: Asymptotic variances of the α-trimmed mean.]

65 The order statistics can be expressed in the empirical distribution through F_n^{-1}(p) = X_(⌈pn⌉) = X_(i) for (i − 1)/n < p ≤ i/n. Hence, we may hope to write the L-statistic in the form φ(F_n) and then apply the delta method to derive the asymptotic distribution. For a fixed function a and a signed measure K on (0, 1), consider the functional φ(F) = ∫_0^1 a(F^{-1}) dK. Then φ(F_n) = Σ_{i=1}^n K((i − 1)/n, i/n] a(X_(i)), which is an L-statistic with coefficients c_{ni} = K((i − 1)/n, i/n]. Not all, but most arrays of coefficients c_{ni} can be generated through a measure K.
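As a concrete check, the sketch below generates the coefficients c_{ni} from the measure K of the α-trimmed mean (the uniform distribution on (α, 1 − α), so interior coefficients equal 1/(n(1 − 2α))) and evaluates the resulting L-statistic on a toy sample:

```python
def K_cdf(u, a):
    # distribution function of the Uniform(a, 1-a) measure K
    return min(1.0, max(0.0, (u - a) / (1.0 - 2.0 * a)))

def l_statistic(xs, a):
    # sum_i c_ni * X_(i) with c_ni = K((i-1)/n, i/n]
    xs, n = sorted(xs), len(xs)
    coeffs = [K_cdf(i / n, a) - K_cdf((i - 1) / n, a) for i in range(1, n + 1)]
    return sum(c * x for c, x in zip(coeffs, xs))

xs = [5.0, 1.0, 4.0, 2.0, 3.0]
print(l_statistic(xs, 0.25))
```

For n = 5 and α = 0.25 the coefficients come out as (0, 0.3, 0.4, 0.3, 0): the boundary cells (i − 1)/n < α < i/n receive only the fractional mass of K they overlap, which is exactly the boundary correction in the trimmed-mean example below.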

67 Example (Trimmed and Winsorized mean): (i) Let a be the identity function and K the uniform distribution on (α, 1 − α). The corresponding L-statistic is (1/(1 − 2α)) ∫_α^{1−α} F_n^{-1}(s) ds = (1/(n − 2αn)) [(⌈αn⌉ − αn) X_(⌈αn⌉) + Σ_{i=⌈αn⌉+1}^{n−⌈αn⌉} X_(i) + (⌈αn⌉ − αn) X_(n−⌈αn⌉+1)]. The difference between this L-statistic and the α-trimmed mean can be seen to be O_P(1/n). (ii) For the α-Winsorized mean, K is the sum of the Lebesgue measure on (α, 1 − α) and point masses of size α at the points α and 1 − α: φ(F_n) = α X_(⌈αn⌉) + (1/n) Σ_{i=⌈αn⌉+1}^{n−⌈αn⌉} X_(i) + α X_(n−⌈αn⌉+1).

71 Consider an L-statistic of the form ∫_0^1 a(F^{-1}) dK. There are two ways to prove asymptotic normality by applying the delta method: (1) view the L-statistic as a function of the empirical quantile function, i.e. apply φ(Q) = ∫_0^1 a(Q) dK to Q = F_n^{-1}; (2) view the L-statistic as a function of the empirical distribution function, i.e. apply φ(F) = ∫_0^1 a(F^{-1}) dK to F = F_n. Both approaches are valid under different sets of conditions on K, a and F. Often we have to combine both approaches!

74 First approach: Goal: apply the delta method to the sequence √n(φ(F_n^{-1}) − φ(F^{-1})), where φ(F_n^{-1}) = ∫_0^1 a(F_n^{-1}) dK. Lemma: Let a : R → R be continuously differentiable with a bounded derivative. Let K be a signed measure on an interval (α, β) ⊂ (0, 1). Then the map Q ↦ ∫ a(Q) dK from ℓ^∞(α, β) to R is Hadamard differentiable at every Q with derivative H ↦ ∫ a'(Q)H dK. Assumptions for asymptotic normality: To ensure the convergence of the empirical quantile process in ℓ^∞(α, β), F has to have a positive density between its α- and β-quantiles. This smoothness of F is unnecessary if we instead assume that K is smooth.

77 Second approach: Goal: apply the delta method to the sequence √n(φ(F_n) − φ(F)), where φ(F_n) = ∫_0^1 a(F_n^{-1}) dK. Lemma: Let a : R → R be of bounded variation on bounded intervals with ∫ (a_+ + a_−) d|K ∘ F| < ∞ and a(0) = 0. Let K be a signed measure on (0, 1) whose distribution function K is differentiable at F(x) for almost every x and satisfies |K(u + h) − K(u)| ≤ M(u)|h| for every sufficiently small |h| and some function M with ∫ M(F) d|a| < ∞. Then the map F ↦ ∫ a ∘ F^{-1} dK from D_F[−∞, ∞] ⊂ D[−∞, ∞] to R is Hadamard differentiable at F, with derivative H ↦ −∫ (K' ∘ F) H da. Assumptions for asymptotic normality: We have to assume that K is sufficiently smooth, but the lemma does not require that F is smooth.

79 Example (Trimmed mean): Consider the α-trimmed mean, where K is the uniform distribution on (α, 1 − α). Assume that the set {x : F(x) = α or F(x) = 1 − α} has Lebesgue measure zero. Then √n(X̄_n^{T,α} − φ(F)) is asymptotically normal with mean zero and variance (1/(1 − 2α)²) ∫_{F^{-1}(α)}^{F^{-1}(1−α)} ∫_{F^{-1}(α)}^{F^{-1}(1−α)} (F(x ∧ y) − F(x)F(y)) dx dy. Example (Winsorized mean): The generating measure of the α-Winsorized mean is the sum of the Lebesgue measure on (α, 1 − α) and point masses of size α at the points α and 1 − α. Hence, we can decompose K (and the Winsorized mean) into a discrete and a continuous part. Combining the two approaches yields asymptotic normality.


More information

Exercises and Answers to Chapter 1

Exercises and Answers to Chapter 1 Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued

Introduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued Introduction to Empirical Processes and Semiparametric Inference Lecture 02: Overview Continued Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and Operations Research

More information

Lecture 16: Sample quantiles and their asymptotic properties

Lecture 16: Sample quantiles and their asymptotic properties Lecture 16: Sample quantiles and their asymptotic properties Estimation of quantiles (percentiles Suppose that X 1,...,X n are i.i.d. random variables from an unknown nonparametric F For p (0,1, G 1 (p

More information

Chapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued

Chapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued Chapter 3 sections 3.1 Random Variables and Discrete Distributions 3.2 Continuous Distributions 3.3 The Cumulative Distribution Function 3.4 Bivariate Distributions 3.5 Marginal Distributions 3.6 Conditional

More information

Chapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued

Chapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued Chapter 3 sections Chapter 3 - continued 3.1 Random Variables and Discrete Distributions 3.2 Continuous Distributions 3.3 The Cumulative Distribution Function 3.4 Bivariate Distributions 3.5 Marginal Distributions

More information

Chapter 6. Order Statistics and Quantiles. 6.1 Extreme Order Statistics

Chapter 6. Order Statistics and Quantiles. 6.1 Extreme Order Statistics Chapter 6 Order Statistics and Quantiles 61 Extreme Order Statistics Suppose we have a finite sample X 1,, X n Conditional on this sample, we define the values X 1),, X n) to be a permutation of X 1,,

More information

IEOR 3106: Introduction to Operations Research: Stochastic Models. Fall 2011, Professor Whitt. Class Lecture Notes: Thursday, September 15.

IEOR 3106: Introduction to Operations Research: Stochastic Models. Fall 2011, Professor Whitt. Class Lecture Notes: Thursday, September 15. IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2011, Professor Whitt Class Lecture Notes: Thursday, September 15. Random Variables, Conditional Expectation and Transforms 1. Random

More information

ST745: Survival Analysis: Nonparametric methods

ST745: Survival Analysis: Nonparametric methods ST745: Survival Analysis: Nonparametric methods Eric B. Laber Department of Statistics, North Carolina State University February 5, 2015 The KM estimator is used ubiquitously in medical studies to estimate

More information

Limiting Distributions

Limiting Distributions Limiting Distributions We introduce the mode of convergence for a sequence of random variables, and discuss the convergence in probability and in distribution. The concept of convergence leads us to the

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Lecture Characterization of Infinitely Divisible Distributions

Lecture Characterization of Infinitely Divisible Distributions Lecture 10 1 Characterization of Infinitely Divisible Distributions We have shown that a distribution µ is infinitely divisible if and only if it is the weak limit of S n := X n,1 + + X n,n for a uniformly

More information

B. Appendix B. Topological vector spaces

B. Appendix B. Topological vector spaces B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function

More information

Measure-theoretic probability

Measure-theoretic probability Measure-theoretic probability Koltay L. VEGTMAM144B November 28, 2012 (VEGTMAM144B) Measure-theoretic probability November 28, 2012 1 / 27 The probability space De nition The (Ω, A, P) measure space is

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Probability and Distributions

Probability and Distributions Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated

More information

The Poisson boundary of certain Cartan-Hadamard manifolds of unbounded curvature

The Poisson boundary of certain Cartan-Hadamard manifolds of unbounded curvature The Poisson boundary of certain Cartan-Hadamard manifolds of unbounded curvature University of Luxembourg Workshop on Boundaries Graz University of Technology June 29 July 4, 2009 Reference Marc Arnaudon,

More information

Lecture 13: Subsampling vs Bootstrap. Dimitris N. Politis, Joseph P. Romano, Michael Wolf

Lecture 13: Subsampling vs Bootstrap. Dimitris N. Politis, Joseph P. Romano, Michael Wolf Lecture 13: 2011 Bootstrap ) R n x n, θ P)) = τ n ˆθn θ P) Example: ˆθn = X n, τ n = n, θ = EX = µ P) ˆθ = min X n, τ n = n, θ P) = sup{x : F x) 0} ) Define: J n P), the distribution of τ n ˆθ n θ P) under

More information

Analysis Qualifying Exam

Analysis Qualifying Exam Analysis Qualifying Exam Spring 2017 Problem 1: Let f be differentiable on R. Suppose that there exists M > 0 such that f(k) M for each integer k, and f (x) M for all x R. Show that f is bounded, i.e.,

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

1 Weak Convergence in R k

1 Weak Convergence in R k 1 Weak Convergence in R k Byeong U. Park 1 Let X and X n, n 1, be random vectors taking values in R k. These random vectors are allowed to be defined on different probability spaces. Below, for the simplicity

More information

STAT 512 sp 2018 Summary Sheet

STAT 512 sp 2018 Summary Sheet STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}

More information

Graduate Econometrics I: Maximum Likelihood I

Graduate Econometrics I: Maximum Likelihood I Graduate Econometrics I: Maximum Likelihood I Yves Dominicy Université libre de Bruxelles Solvay Brussels School of Economics and Management ECARES Yves Dominicy Graduate Econometrics I: Maximum Likelihood

More information

NONPARAMETRIC CONFIDENCE INTERVALS FOR MONOTONE FUNCTIONS. By Piet Groeneboom and Geurt Jongbloed Delft University of Technology

NONPARAMETRIC CONFIDENCE INTERVALS FOR MONOTONE FUNCTIONS. By Piet Groeneboom and Geurt Jongbloed Delft University of Technology NONPARAMETRIC CONFIDENCE INTERVALS FOR MONOTONE FUNCTIONS By Piet Groeneboom and Geurt Jongbloed Delft University of Technology We study nonparametric isotonic confidence intervals for monotone functions.

More information

Laplace s Equation. Chapter Mean Value Formulas

Laplace s Equation. Chapter Mean Value Formulas Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic

More information

MIT Spring 2015

MIT Spring 2015 MIT 18.443 Dr. Kempthorne Spring 2015 MIT 18.443 1 Outline 1 MIT 18.443 2 Batches of data: single or multiple x 1, x 2,..., x n y 1, y 2,..., y m w 1, w 2,..., w l etc. Graphical displays Summary statistics:

More information

1 Glivenko-Cantelli type theorems

1 Glivenko-Cantelli type theorems STA79 Lecture Spring Semester Glivenko-Cantelli type theorems Given i.i.d. observations X,..., X n with unknown distribution function F (t, consider the empirical (sample CDF ˆF n (t = I [Xi t]. n Then

More information

SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS. Massimo Grosi Filomena Pacella S. L. Yadava. 1. Introduction

SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS. Massimo Grosi Filomena Pacella S. L. Yadava. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 21, 2003, 211 226 SYMMETRY RESULTS FOR PERTURBED PROBLEMS AND RELATED QUESTIONS Massimo Grosi Filomena Pacella S.

More information

Efficient and Robust Scale Estimation

Efficient and Robust Scale Estimation Efficient and Robust Scale Estimation Garth Tarr, Samuel Müller and Neville Weber School of Mathematics and Statistics THE UNIVERSITY OF SYDNEY Outline Introduction and motivation The robust scale estimator

More information

Stochastic Convergence, Delta Method & Moment Estimators

Stochastic Convergence, Delta Method & Moment Estimators Stochastic Convergence, Delta Method & Moment Estimators Seminar on Asymptotic Statistics Daniel Hoffmann University of Kaiserslautern Department of Mathematics February 13, 2015 Daniel Hoffmann (TU KL)

More information

Uses of Asymptotic Distributions: In order to get distribution theory, we need to norm the random variable; we usually look at n 1=2 ( X n ).

Uses of Asymptotic Distributions: In order to get distribution theory, we need to norm the random variable; we usually look at n 1=2 ( X n ). 1 Economics 620, Lecture 8a: Asymptotics II Uses of Asymptotic Distributions: Suppose X n! 0 in probability. (What can be said about the distribution of X n?) In order to get distribution theory, we need

More information

Notes on Distributions

Notes on Distributions Notes on Distributions Functional Analysis 1 Locally Convex Spaces Definition 1. A vector space (over R or C) is said to be a topological vector space (TVS) if it is a Hausdorff topological space and the

More information

Chapter 3: Random Variables 1

Chapter 3: Random Variables 1 Chapter 3: Random Variables 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

Gaussian Random Field: simulation and quantification of the error

Gaussian Random Field: simulation and quantification of the error Gaussian Random Field: simulation and quantification of the error EMSE Workshop 6 November 2017 3 2 1 0-1 -2-3 80 60 40 1 Continuity Separability Proving continuity without separability 2 The stationary

More information

Product-limit estimators of the survival function with left or right censored data

Product-limit estimators of the survival function with left or right censored data Product-limit estimators of the survival function with left or right censored data 1 CREST-ENSAI Campus de Ker-Lann Rue Blaise Pascal - BP 37203 35172 Bruz cedex, France (e-mail: patilea@ensai.fr) 2 Institut

More information

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true 3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO

More information

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real

More information

P-adic Functions - Part 1

P-adic Functions - Part 1 P-adic Functions - Part 1 Nicolae Ciocan 22.11.2011 1 Locally constant functions Motivation: Another big difference between p-adic analysis and real analysis is the existence of nontrivial locally constant

More information

1 Continuity Classes C m (Ω)

1 Continuity Classes C m (Ω) 0.1 Norms 0.1 Norms A norm on a linear space X is a function : X R with the properties: Positive Definite: x 0 x X (nonnegative) x = 0 x = 0 (strictly positive) λx = λ x x X, λ C(homogeneous) x + y x +

More information

Phenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 2012

Phenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 2012 Phenomena in high dimensions in geometric analysis, random matrices, and computational geometry Roscoff, France, June 25-29, 202 BOUNDS AND ASYMPTOTICS FOR FISHER INFORMATION IN THE CENTRAL LIMIT THEOREM

More information

11 Survival Analysis and Empirical Likelihood

11 Survival Analysis and Empirical Likelihood 11 Survival Analysis and Empirical Likelihood The first paper of empirical likelihood is actually about confidence intervals with the Kaplan-Meier estimator (Thomas and Grunkmeier 1979), i.e. deals with

More information

simple if it completely specifies the density of x

simple if it completely specifies the density of x 3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely

More information

Robust Inference. A central concern in robust statistics is how a functional of a CDF behaves as the distribution is perturbed.

Robust Inference. A central concern in robust statistics is how a functional of a CDF behaves as the distribution is perturbed. Robust Inference Although the statistical functions we have considered have intuitive interpretations, the question remains as to what are the most useful distributional measures by which to describe a

More information

STAT 6385 Survey of Nonparametric Statistics. Order Statistics, EDF and Censoring

STAT 6385 Survey of Nonparametric Statistics. Order Statistics, EDF and Censoring STAT 6385 Survey of Nonparametric Statistics Order Statistics, EDF and Censoring Quantile Function A quantile (or a percentile) of a distribution is that value of X such that a specific percentage of the

More information

The Central Limit Theorem Under Random Truncation

The Central Limit Theorem Under Random Truncation The Central Limit Theorem Under Random Truncation WINFRIED STUTE and JANE-LING WANG Mathematical Institute, University of Giessen, Arndtstr., D-3539 Giessen, Germany. winfried.stute@math.uni-giessen.de

More information

Nonlife Actuarial Models. Chapter 14 Basic Monte Carlo Methods

Nonlife Actuarial Models. Chapter 14 Basic Monte Carlo Methods Nonlife Actuarial Models Chapter 14 Basic Monte Carlo Methods Learning Objectives 1. Generation of uniform random numbers, mixed congruential method 2. Low discrepancy sequence 3. Inversion transformation

More information

Functional Analysis I

Functional Analysis I Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker

More information

CS145: Probability & Computing

CS145: Probability & Computing CS45: Probability & Computing Lecture 5: Concentration Inequalities, Law of Large Numbers, Central Limit Theorem Instructor: Eli Upfal Brown University Computer Science Figure credits: Bertsekas & Tsitsiklis,

More information

Measure and Integration: Solutions of CW2

Measure and Integration: Solutions of CW2 Measure and Integration: s of CW2 Fall 206 [G. Holzegel] December 9, 206 Problem of Sheet 5 a) Left (f n ) and (g n ) be sequences of integrable functions with f n (x) f (x) and g n (x) g (x) for almost

More information

Sobolev Spaces. Chapter 10

Sobolev Spaces. Chapter 10 Chapter 1 Sobolev Spaces We now define spaces H 1,p (R n ), known as Sobolev spaces. For u to belong to H 1,p (R n ), we require that u L p (R n ) and that u have weak derivatives of first order in L p

More information

Goodness-of-fit tests for the cure rate in a mixture cure model

Goodness-of-fit tests for the cure rate in a mixture cure model Biometrika (217), 13, 1, pp. 1 7 Printed in Great Britain Advance Access publication on 31 July 216 Goodness-of-fit tests for the cure rate in a mixture cure model BY U.U. MÜLLER Department of Statistics,

More information

************************************* Applied Analysis I - (Advanced PDE I) (Math 940, Fall 2014) Baisheng Yan

************************************* Applied Analysis I - (Advanced PDE I) (Math 940, Fall 2014) Baisheng Yan ************************************* Applied Analysis I - (Advanced PDE I) (Math 94, Fall 214) by Baisheng Yan Department of Mathematics Michigan State University yan@math.msu.edu Contents Chapter 1.

More information

u xx + u yy = 0. (5.1)

u xx + u yy = 0. (5.1) Chapter 5 Laplace Equation The following equation is called Laplace equation in two independent variables x, y: The non-homogeneous problem u xx + u yy =. (5.1) u xx + u yy = F, (5.) where F is a function

More information

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Data from one or a series of random experiments are collected. Planning experiments and collecting data (not discussed here). Analysis:

More information

Journal of Inequalities in Pure and Applied Mathematics

Journal of Inequalities in Pure and Applied Mathematics Journal of Inequalities in Pure and Applied Mathematics ON SIMULTANEOUS APPROXIMATION FOR CERTAIN BASKAKOV DURRMEYER TYPE OPERATORS VIJAY GUPTA, MUHAMMAD ASLAM NOOR AND MAN SINGH BENIWAL School of Applied

More information

Chapter 4. Chapter 4 sections

Chapter 4. Chapter 4 sections Chapter 4 sections 4.1 Expectation 4.2 Properties of Expectations 4.3 Variance 4.4 Moments 4.5 The Mean and the Median 4.6 Covariance and Correlation 4.7 Conditional Expectation SKIP: 4.8 Utility Expectation

More information

Probability Distribution And Density For Functional Random Variables

Probability Distribution And Density For Functional Random Variables Probability Distribution And Density For Functional Random Variables E. Cuvelier 1 M. Noirhomme-Fraiture 1 1 Institut d Informatique Facultés Universitaires Notre-Dame de la paix Namur CIL Research Contact

More information

Distance between multinomial and multivariate normal models

Distance between multinomial and multivariate normal models Chapter 9 Distance between multinomial and multivariate normal models SECTION 1 introduces Andrew Carter s recursive procedure for bounding the Le Cam distance between a multinomialmodeland its approximating

More information

Goodness-of-Fit Testing with Empirical Copulas

Goodness-of-Fit Testing with Empirical Copulas with Empirical Copulas Sami Umut Can John Einmahl Roger Laeven Department of Econometrics and Operations Research Tilburg University January 19, 2011 with Empirical Copulas Overview of Copulas with Empirical

More information

Statistics. Statistics

Statistics. Statistics The main aims of statistics 1 1 Choosing a model 2 Estimating its parameter(s) 1 point estimates 2 interval estimates 3 Testing hypotheses Distributions used in statistics: χ 2 n-distribution 2 Let X 1,

More information

Stochastic Comparisons of Order Statistics from Generalized Normal Distributions

Stochastic Comparisons of Order Statistics from Generalized Normal Distributions A^VÇÚO 1 33 ò 1 6 Ï 2017 c 12 Chinese Journal of Applied Probability and Statistics Dec. 2017 Vol. 33 No. 6 pp. 591-607 doi: 10.3969/j.issn.1001-4268.2017.06.004 Stochastic Comparisons of Order Statistics

More information

Two-Step Iteration Scheme for Nonexpansive Mappings in Banach Space

Two-Step Iteration Scheme for Nonexpansive Mappings in Banach Space Mathematica Moravica Vol. 19-1 (2015), 95 105 Two-Step Iteration Scheme for Nonexpansive Mappings in Banach Space M.R. Yadav Abstract. In this paper, we introduce a new two-step iteration process to approximate

More information

Functional Analysis

Functional Analysis The Hahn Banach Theorem : Functional Analysis 1-9-06 Extensions of Linear Forms and Separation of Convex Sets Let E be a vector space over R and F E be a subspace. A function f : F R is linear if f(αx

More information

Density estimators for the convolution of discrete and continuous random variables

Density estimators for the convolution of discrete and continuous random variables Density estimators for the convolution of discrete and continuous random variables Ursula U Müller Texas A&M University Anton Schick Binghamton University Wolfgang Wefelmeyer Universität zu Köln Abstract

More information

Upper and lower bounds for ruin probability

Upper and lower bounds for ruin probability Upper and lower bounds for ruin probability E. Pancheva,Z.Volkovich and L.Morozensky 3 Institute of Mathematics and Informatics, the Bulgarian Academy of Sciences, 3 Sofia, Bulgaria pancheva@math.bas.bg

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

Notes 9 : Infinitely divisible and stable laws

Notes 9 : Infinitely divisible and stable laws Notes 9 : Infinitely divisible and stable laws Math 733 - Fall 203 Lecturer: Sebastien Roch References: [Dur0, Section 3.7, 3.8], [Shi96, Section III.6]. Infinitely divisible distributions Recall: EX 9.

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2009 Prof. Gesine Reinert Our standard situation is that we have data x = x 1, x 2,..., x n, which we view as realisations of random

More information

Econ Lecture 3. Outline. 1. Metric Spaces and Normed Spaces 2. Convergence of Sequences in Metric Spaces 3. Sequences in R and R n

Econ Lecture 3. Outline. 1. Metric Spaces and Normed Spaces 2. Convergence of Sequences in Metric Spaces 3. Sequences in R and R n Econ 204 2011 Lecture 3 Outline 1. Metric Spaces and Normed Spaces 2. Convergence of Sequences in Metric Spaces 3. Sequences in R and R n 1 Metric Spaces and Metrics Generalize distance and length notions

More information

1 Hypothesis testing for a single mean

1 Hypothesis testing for a single mean This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike License. Your use of this material constitutes acceptance of that license and the conditions of use of materials on this

More information

Continuous Random Variables. and Probability Distributions. Continuous Random Variables and Probability Distributions ( ) ( ) Chapter 4 4.

Continuous Random Variables. and Probability Distributions. Continuous Random Variables and Probability Distributions ( ) ( ) Chapter 4 4. UCLA STAT 11 A Applied Probability & Statistics for Engineers Instructor: Ivo Dinov, Asst. Prof. In Statistics and Neurology Teaching Assistant: Christopher Barr University of California, Los Angeles,

More information

Stochastic Calculus. Kevin Sinclair. August 2, 2016

Stochastic Calculus. Kevin Sinclair. August 2, 2016 Stochastic Calculus Kevin Sinclair August, 16 1 Background Suppose we have a Brownian motion W. This is a process, and the value of W at a particular time T (which we write W T ) is a normally distributed

More information

On Some Estimates of the Remainder in Taylor s Formula

On Some Estimates of the Remainder in Taylor s Formula Journal of Mathematical Analysis and Applications 263, 246 263 (2) doi:.6/jmaa.2.7622, available online at http://www.idealibrary.com on On Some Estimates of the Remainder in Taylor s Formula G. A. Anastassiou

More information

Problem Set 5: Solutions Math 201A: Fall 2016

Problem Set 5: Solutions Math 201A: Fall 2016 Problem Set 5: s Math 21A: Fall 216 Problem 1. Define f : [1, ) [1, ) by f(x) = x + 1/x. Show that f(x) f(y) < x y for all x, y [1, ) with x y, but f has no fixed point. Why doesn t this example contradict

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2008 Prof. Gesine Reinert 1 Data x = x 1, x 2,..., x n, realisations of random variables X 1, X 2,..., X n with distribution (model)

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

Stability of optimization problems with stochastic dominance constraints

Stability of optimization problems with stochastic dominance constraints Stability of optimization problems with stochastic dominance constraints D. Dentcheva and W. Römisch Stevens Institute of Technology, Hoboken Humboldt-University Berlin www.math.hu-berlin.de/~romisch SIAM

More information

EMPIRICAL LIKELIHOOD AND DIFFERENTIABLE FUNCTIONALS

EMPIRICAL LIKELIHOOD AND DIFFERENTIABLE FUNCTIONALS University of Kentucky UKnowledge Theses and Dissertations--Statistics Statistics 2016 EMPIRICAL LIKELIHOOD AND DIFFERENTIABLE FUNCTIONALS Zhiyuan Shen University of Kentucky, alanshenpku10@gmail.com Digital

More information

(B(t i+1 ) B(t i )) 2

(B(t i+1 ) B(t i )) 2 ltcc5.tex Week 5 29 October 213 Ch. V. ITÔ (STOCHASTIC) CALCULUS. WEAK CONVERGENCE. 1. Quadratic Variation. A partition π n of [, t] is a finite set of points t ni such that = t n < t n1

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Estimation of the functional Weibull-tail coefficient

Estimation of the functional Weibull-tail coefficient 1/ 29 Estimation of the functional Weibull-tail coefficient Stéphane Girard Inria Grenoble Rhône-Alpes & LJK, France http://mistis.inrialpes.fr/people/girard/ June 2016 joint work with Laurent Gardes,

More information

Limit Theorems for Quantile and Depth Regions for Stochastic Processes

Limit Theorems for Quantile and Depth Regions for Stochastic Processes Limit Theorems for Quantile and Depth Regions for Stochastic Processes James Kuelbs and Joel Zinn Abstract. Since contours of multi-dimensional depth functions often characterize the distribution, it has

More information

SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM

SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM Solutions to Question A1 a) The marginal cdfs of F X,Y (x, y) = [1 + exp( x) + exp( y) + (1 α) exp( x y)] 1 are F X (x) = F X,Y (x, ) = [1

More information

Irr. Statistical Methods in Experimental Physics. 2nd Edition. Frederick James. World Scientific. CERN, Switzerland

Irr. Statistical Methods in Experimental Physics. 2nd Edition. Frederick James. World Scientific. CERN, Switzerland Frederick James CERN, Switzerland Statistical Methods in Experimental Physics 2nd Edition r i Irr 1- r ri Ibn World Scientific NEW JERSEY LONDON SINGAPORE BEIJING SHANGHAI HONG KONG TAIPEI CHENNAI CONTENTS

More information