Statistics 581, Problem Set 8 Solutions Wellner; 11/22/2018


1. (a) Show that if $\theta_n = cn^{-1/2}$ and $T_n$ is the Hodges super-efficient estimator discussed in class, then the sequence $\{\sqrt{n}(T_n - \theta_n)\}$ is uniformly square-integrable.
(b) Let $R_n(\theta) \equiv n E_\theta (T_n - \theta)^2$ where $T_n$ is the Hodges super-efficient estimator as in class (so $T_n = \delta_n$ of the corresponding example in Lehmann and Casella). Show that $R_n(n^{-1/4}) \to \infty$ as $n \to \infty$.

Solution: (a) First recall that (with $\delta_n = T_n$) since $\sqrt{n}(\bar X_n - \theta) \stackrel{d}{=} Z \sim N(0,1)$ we can write
\begin{align*}
\sqrt{n}(T_n - \theta) &= \sqrt{n}\big(\bar X_n 1_{[|\bar X_n| > n^{-1/4}]} + a\bar X_n 1_{[|\bar X_n| \le n^{-1/4}]} - \theta\big)\\
&\stackrel{d}{=} Z\,1_{[|Z + \theta\sqrt{n}| > n^{1/4}]} + [aZ + \sqrt{n}\,\theta(a-1)]\,1_{[|Z+\theta\sqrt{n}| \le n^{1/4}]}\\
&= Z + [(a-1)Z + (a-1)\sqrt{n}\,\theta]\,1_{[|Z+\theta\sqrt{n}| \le n^{1/4}]}\\
&= Z - (1-a)[Z + \sqrt{n}\,\theta]\,1_{[|Z+\theta\sqrt{n}| \le n^{1/4}]}.
\end{align*}
Thus (as we showed in class) when $\theta_n = cn^{-1/2}$ we have
\begin{align*}
\sqrt{n}(T_n - \theta_n) &\stackrel{d}{=} Z\,1_{[|Z+c|>n^{1/4}]} + [aZ + c(a-1)]\,1_{[|Z+c|\le n^{1/4}]}\\
&= Z - (1-a)[Z+c]\,1_{[|Z+c|\le n^{1/4}]}. \tag{1}
\end{align*}
Thus
$$ Y_n \equiv n(T_n-\theta_n)^2 \stackrel{d}{=} \big\{Z - (1-a)[Z+c]1_{[|Z+c|\le n^{1/4}]}\big\}^2 \le \big(|Z| + (1-a)|Z+c|\big)^2 \equiv Y $$
where
$$ E(Y) \le 2\big(E(Z^2) + (1-a)^2E(Z+c)^2\big) < \infty. $$
Thus
$$ \limsup_n E\{Y_n 1_{[Y_n \ge \lambda]}\} \le E\{Y 1_{[Y \ge \lambda]}\} \to 0 $$
as $\lambda \to \infty$. Hence $\{Y_n\}$ is uniformly integrable; that is, $\{\sqrt{n}(T_n - \theta_n)\}$ is uniformly square-integrable.

(b) (a$'$) Note that the identity (1) in (a) above holds with $c$ replaced by $\sqrt{n}\,\theta$. Thus the bias is
\begin{align*}
b_n(\theta) = E_\theta(T_n) - \theta &= n^{-1/2} E\big\{Z - (1-a)[Z+\sqrt{n}\theta]1_{[|Z+\sqrt{n}\theta|\le n^{1/4}]}\big\}\\
&= -\frac{1-a}{\sqrt{n}}\, E\big\{[Z+\sqrt{n}\theta]1_{[|Z+\sqrt{n}\theta|\le n^{1/4}]}\big\}\\
&= -\frac{1-a}{\sqrt{n}}\int_{-n^{1/4}}^{n^{1/4}} x\,\varphi(x - \sqrt{n}\theta)\,dx
\end{align*}
since $Z + \sqrt{n}\theta \sim N(\sqrt{n}\theta, 1)$.

(b$'$) Differentiating the result in (a$'$) gives
\begin{align*}
b_n'(\theta) &= -\frac{1-a}{\sqrt{n}}\int_{-n^{1/4}}^{n^{1/4}} x\,\varphi'(x-\sqrt{n}\theta)(-\sqrt{n})\,dx\\
&= -(1-a)\int_{-n^{1/4}}^{n^{1/4}} x(x-\sqrt{n}\theta)\,\varphi(x-\sqrt{n}\theta)\,dx \qquad \text{since } \varphi'(x) = -x\varphi(x)\\
&\to 0 \quad\text{if } \theta \ne 0
\end{align*}
by the dominated convergence theorem, since $x(x-\sqrt{n}\theta)\varphi(x-\sqrt{n}\theta)1_{[-n^{1/4},n^{1/4}]}(x) \to 0$ for each fixed $x$ and is dominated by the integrable function $4e^{-1}\varphi(x)/(|\theta|\wedge 1)$ (for $n \ge (3/|\theta|)^4$). Details of this domination: for $|x| \le n^{1/4}$ it follows that
$$ |x|\,|x - \sqrt{n}\theta| \le n^{1/4}(n^{1/4} + \sqrt{n}|\theta|) \le n^{1/2} + n^{3/4}|\theta| \le 2n^{3/4}(|\theta|\vee 1), $$
while
\begin{align*}
\varphi(x-\sqrt{n}\theta) &= \varphi(x)\exp(\sqrt{n}\theta x - n\theta^2/2) \le \varphi(x)\exp(|\theta|n^{3/4} - n\theta^2/2)\\
&= \varphi(x)\exp\big(|\theta|n^{3/4}(1 - n^{1/4}|\theta|/2)\big) \le \varphi(x)\exp\big(-\tfrac12|\theta|n^{3/4}\big)
\end{align*}
if $1 - n^{1/4}|\theta|/2 < -1/2$ or, equivalently, when $n > (3/|\theta|)^4$. Combining these two bounds yields
\begin{align*}
|x|\,|x-\sqrt{n}\theta|\,\varphi(x-\sqrt{n}\theta) &\le \varphi(x)\,2n^{3/4}(|\theta|\vee 1)\exp(-|\theta|n^{3/4}/2)\\
&= \varphi(x)\begin{cases} 2n^{3/4}\exp(-|\theta|n^{3/4}/2) & \text{if } |\theta|<1,\\ 2n^{3/4}|\theta|\exp(-|\theta|n^{3/4}/2) & \text{if } |\theta|\ge 1\end{cases}\\
&= \varphi(x)\begin{cases} (4/|\theta|)(n^{3/4}|\theta|/2)\exp(-n^{3/4}|\theta|/2) & \text{if } |\theta|<1,\\ 4(n^{3/4}|\theta|/2)\exp(-n^{3/4}|\theta|/2) & \text{if } |\theta|\ge 1\end{cases}\\
&\le \frac{4e^{-1}}{|\theta|\wedge 1}\,\varphi(x),
\end{align*}
using $ue^{-u} \le e^{-1}$.

(b), second (more elegant) solution: from the lecture notes, 3.3 (3), it follows that
$$ R_n(\theta) = E_\theta[n(T_n-\theta)^2] = n\,\mathrm{Var}_\theta[T_n] + n\,b_n(\theta)^2 \ge n\,b_n(\theta)^2. $$
Using the formula for $b_n(\theta)$ from (a$'$) above with $\theta = n^{-1/4}$ (so $\sqrt{n}\,\theta = n^{1/4}$ and $n\,b_n(n^{-1/4})^2 = (1-a)^2\big[\int_{-n^{1/4}}^{n^{1/4}} x\varphi(x-n^{1/4})\,dx\big]^2$), it follows that it is enough to show that
$$ \int_{-n^{1/4}}^{n^{1/4}} x\,\varphi(x - n^{1/4})\,dx \to \infty \qquad\text{as } n \to \infty. $$
But we have, with $Z \sim N(0,1)$ (and hence $E|Z| < \infty$),
\begin{align*}
\int_{-n^{1/4}}^{n^{1/4}} x\,\varphi(x-n^{1/4})\,dx &= \int_{-2n^{1/4}}^{0}(y + n^{1/4})\,\varphi(y)\,dy\\
&\ge n^{1/4}\big(\Phi(0) - \Phi(-2n^{1/4})\big) - E|Z| \to \infty.
\end{align*}
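The divergence $R_n(n^{-1/4}) \to \infty$ is easy to see numerically. Below is a minimal Monte Carlo sketch (assuming NumPy; the shrinkage constant $a = 0.5$, the sample sizes, and the replication count are arbitrary choices, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)

def hodges_risk(n, theta, a=0.5, reps=200_000):
    """Monte Carlo estimate of R_n(theta) = n E_theta (T_n - theta)^2
    for the Hodges estimator; the shrinkage factor a is arbitrary."""
    # Xbar_n ~ N(theta, 1/n), so simulate the sample mean directly.
    xbar = theta + rng.standard_normal(reps) / np.sqrt(n)
    tn = np.where(np.abs(xbar) > n ** (-0.25), xbar, a * xbar)
    return n * np.mean((tn - theta) ** 2)

for n in [10**2, 10**3, 10**4, 10**5, 10**6]:
    print(n, hodges_risk(n, theta=n ** (-0.25)))
# The estimated risk at theta = n^{-1/4} grows without bound,
# consistent with R_n(n^{-1/4}) >= n b_n(n^{-1/4})^2 -> infinity.
```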

2. (Super-efficiency at two parameter values.) Suppose that $X_1,\dots,X_n$ are i.i.d. $N(\theta,1)$ where $\theta \in \mathbb{R}$. Let $a, b \in [0,1)$ and define the estimator $T_n$ as follows:
$$ T_n = \begin{cases} \bar X_n & \text{if } |\bar X_n - 1| > n^{-1/4} \text{ and } |\bar X_n + 1| > n^{-1/4},\\ a\bar X_n + (1-a) & \text{if } |\bar X_n - 1| \le n^{-1/4},\\ b\bar X_n + (1-b)(-1) & \text{if } |\bar X_n + 1| \le n^{-1/4}. \end{cases} $$
(a) Find the limiting distribution of $\sqrt{n}(T_n - \theta)$ when: (i) $\theta \ne 1$ and $\theta \ne -1$; (ii) $\theta = 1$; (iii) $\theta = -1$.
(b) Find the limiting distribution of $\sqrt{n}(T_n - \theta_n)$ when: (i) $\theta_n = 1 + cn^{-1/2}$; (ii) $\theta_n = -1 + cn^{-1/2}$.
(c) Could we have super-efficiency at a countable collection of parameter values?

Solution: (a) Note that $\sqrt{n}(\bar X_n - \theta) \stackrel{d}{=} Z \sim N(0,1)$ for all $\theta \in \mathbb{R}$ and $n \in \mathbb{N}$. Thus we find that
\begin{align*}
\sqrt{n}(T_n - \theta) &= \sqrt{n}(\bar X_n - \theta)\,1_{[|\bar X_n - 1|>n^{-1/4}]}\,1_{[|\bar X_n+1|>n^{-1/4}]}\\
&\quad + \sqrt{n}(a\bar X_n + (1-a) - \theta)\,1_{[|\bar X_n-1|\le n^{-1/4}]} + \sqrt{n}(b\bar X_n - (1-b) - \theta)\,1_{[|\bar X_n+1|\le n^{-1/4}]}\\
&\stackrel{d}{=} Z\,1_{[|Z + \sqrt{n}(\theta-1)|>n^{1/4}]}\,1_{[|Z+\sqrt{n}(\theta+1)|>n^{1/4}]}\\
&\quad + \big\{aZ + \sqrt{n}(a\theta - \theta + (1-a))\big\}\,1_{[|Z + \sqrt{n}(\theta-1)|\le n^{1/4}]}\\
&\quad + \big\{bZ + \sqrt{n}(b\theta - \theta - (1-b))\big\}\,1_{[|Z+\sqrt{n}(\theta+1)|\le n^{1/4}]}\\
&\to_d \begin{cases} Z & \text{if } \theta \ne 1,\ \theta\ne -1,\\ aZ & \text{if } \theta = 1,\\ bZ & \text{if } \theta = -1, \end{cases}
\end{align*}
that is, $N(0, V(\theta))$ where
$$ V(\theta) = 1_{\mathbb{R}\setminus\{-1,1\}}(\theta) + a^2\,1_{\{1\}}(\theta) + b^2\,1_{\{-1\}}(\theta). $$

(b) If $\theta = \theta_n = 1 + cn^{-1/2}$,
$$ \sqrt{n}(T_n - \theta_n) \stackrel{d}{=} Z\,1_{[|Z+c|>n^{1/4}]} + (aZ + c(a-1))\,1_{[|Z+c|\le n^{1/4}]} + o_p(1) \to_d aZ + c(a-1) \sim N(c(a-1), a^2). $$
In the same way, if $\theta = \theta_n = -1 + cn^{-1/2}$, we find that
$$ \sqrt{n}(T_n - \theta_n) \to_d bZ + c(b-1) \sim N(c(b-1), b^2). $$

(c) Yes: a similar construction works to yield super-efficiency at all $\theta \in \mathbb{Z} = \{0, \pm 1, \pm 2, \ldots\}$.
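A quick simulation of part (b)(i) under local alternatives can make the limit concrete. The following sketch (NumPy assumed; the values $a = 0.5$, $c = 2$, $n = 10^6$, and the replication count are arbitrary illustration choices) compares the simulated mean and variance of $\sqrt{n}(T_n - \theta_n)$ with $N(c(a-1), a^2)$:

```python
import numpy as np

rng = np.random.default_rng(1)

def two_point_estimator(xbar, n, a=0.5, b=0.8):
    """The two-point shrinkage estimator of problem 2 (a, b arbitrary here)."""
    tn = xbar.copy()
    near_plus1 = np.abs(xbar - 1) <= n ** (-0.25)
    near_minus1 = np.abs(xbar + 1) <= n ** (-0.25)
    tn[near_plus1] = a * xbar[near_plus1] + (1 - a)
    tn[near_minus1] = b * xbar[near_minus1] + (1 - b) * (-1)
    return tn

n, c, a, reps = 10**6, 2.0, 0.5, 100_000
theta_n = 1 + c / np.sqrt(n)
xbar = theta_n + rng.standard_normal(reps) / np.sqrt(n)
w = np.sqrt(n) * (two_point_estimator(xbar, n, a=a) - theta_n)
print(w.mean(), w.var())   # compare to the limiting
print(c * (a - 1), a**2)   # N(c(a-1), a^2) mean and variance
```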

3. Suppose that $X_1,\dots,X_n$ are i.i.d. with distribution function $F$ having a continuous density function $f$. Let $\mathbb{F}_n$ be the empirical distribution function of the $X_i$'s, suppose that $b_n$ is a sequence of positive numbers, and let
$$ \hat f_n(x) = \frac{\mathbb{F}_n(x + b_n) - \mathbb{F}_n(x - b_n)}{2b_n}. $$
(a) Compute $E\{\hat f_n(x)\}$ and $\mathrm{Var}(\hat f_n(x))$.
(b) Show that $E\hat f_n(x) \to f(x)$ if $b_n \to 0$.
(c) Show that $\mathrm{Var}(\hat f_n(x)) \to 0$ if $b_n \to 0$ and $nb_n \to \infty$.
(d) Use some appropriate central limit theorem to show that (perhaps under some suitable further conditions that you might need to specify)
$$ \sqrt{2nb_n}\big(\hat f_n(x) - E\hat f_n(x)\big) \to_d N(0, f(x)). $$
Hint: Write $\hat f_n(x)$ in terms of some Bernoulli random variables and identify $p = p_n$.

Solution: (a) First note that $2nb_n\hat f_n(x) = n(\mathbb{F}_n(x+b_n) - \mathbb{F}_n(x-b_n))$ is a Binomial$(n, p_n)$ random variable with $p_n = F(x+b_n) - F(x-b_n)$. Hence, if $b_n \to 0$,
$$ E\hat f_n(x) = \frac{F(x+b_n) - F(x-b_n)}{2b_n} = \frac{p_n}{2b_n} = \frac12\left\{\frac{F(x+b_n)-F(x)}{b_n} + \frac{F(x)-F(x-b_n)}{b_n}\right\} \to \frac12\{f(x)+f(x)\} = f(x). $$

(b) Furthermore
$$ \mathrm{Var}(\hat f_n(x)) = \frac{np_n(1-p_n)}{(2nb_n)^2} = \frac{1}{2nb_n}\cdot\frac{p_n}{2b_n}\cdot(1-p_n) \to 0\cdot f(x)\cdot 1 = 0 $$
if $nb_n \to \infty$ and $b_n \to 0$.

(c) Since $2nb_n\hat f_n(x) = \sum_{i=1}^n X_{ni}$ where $X_{ni} \equiv 1_{(x-b_n,\,x+b_n]}(X_i) \sim$ Bernoulli$(p_n)$, it follows that $\sigma_{ni}^2 = p_n(1-p_n)$ so that $\sigma_n^2 = \mathrm{Var}(\sum_{i=1}^n X_{ni}) = np_n(1-p_n)$, and
$$ \gamma_n^3 \equiv \sum_{i=1}^n \gamma_{ni}^3 = \sum_{i=1}^n E|X_{ni} - \mu_{ni}|^3 = np_n(1-p_n)\{(1-p_n)^2 + p_n^2\} \le np_n(1-p_n), $$
so that
$$ \frac{\gamma_n^3}{\sigma_n^3} \le \frac{1}{\sqrt{np_n(1-p_n)}} = \frac{1}{\sqrt{2nb_n\,(p_n/(2b_n))(1-p_n)}} \to 0 $$
if $b_n \to 0$ and $nb_n \to \infty$. Thus, by the Liapunov CLT,
$$ \frac{2nb_n(\hat f_n(x) - E\hat f_n(x))}{\sqrt{np_n(1-p_n)}} \to_d N(0,1) $$
if $b_n \to 0$ and $nb_n \to \infty$. Thus
$$ \sqrt{2nb_n}\big(\hat f_n(x) - E\hat f_n(x)\big) = \frac{2nb_n(\hat f_n(x) - E\hat f_n(x))}{\sqrt{np_n(1-p_n)}}\cdot\sqrt{\frac{np_n(1-p_n)}{2nb_n}} \to_d N(0,1)\cdot\sqrt{f(x)} = N(0, f(x)). $$
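The CLT in (d) can be checked numerically. Here is a minimal sketch (NumPy assumed; standard-normal data, the point $x = 0$, the bandwidth $b_n = n^{-1/3}$, and the sample/replication sizes are all arbitrary choices satisfying $b_n \to 0$, $nb_n \to \infty$):

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def fhat(data, x, b):
    """Difference-quotient density estimator (F_n(x+b) - F_n(x-b)) / (2b)."""
    return np.mean((data > x - b) & (data <= x + b)) / (2 * b)

def Phi(z):  # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

n, x = 10_000, 0.0
b = n ** (-1 / 3)
mean_fhat = (Phi(x + b) - Phi(x - b)) / (2 * b)  # exact E fhat_n(x)
z = np.array([math.sqrt(2 * n * b) * (fhat(rng.standard_normal(n), x, b) - mean_fhat)
              for _ in range(2000)])
# Sample variance of sqrt(2 n b_n)(fhat - E fhat) vs f(0) = 1/sqrt(2 pi):
print(z.var(), 1 / math.sqrt(2 * math.pi))
```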

4. Suppose that $(T \mid Z = z) \sim$ Weibull$(\lambda^{-1}e^{-\gamma z}, \beta)$, and $Z \sim G_\eta$ on $\mathbb{R}$ with density $g_\eta$ with respect to some dominating measure $\mu$. Thus the conditional cumulative hazard function $\Lambda(t\,|\,z)$ is given by
$$ \Lambda_{\gamma,\lambda,\beta}(t\,|\,z) = (\lambda e^{\gamma z} t)^\beta = \lambda^\beta e^{\beta\gamma z} t^\beta, $$
and hence
$$ \lambda_{\gamma,\lambda,\beta}(t\,|\,z) = \lambda^\beta e^{\beta\gamma z}\,\beta t^{\beta-1}. $$
(Recall that $\lambda(t) = f(t)/(1-F(t))$ and $\Lambda(t) \equiv \int_0^t \lambda(s)\,ds = \int_0^t (1-F(s))^{-1}\,dF(s) = -\log(1-F(t))$ if $F$ is continuous.) Thus it makes sense to re-parametrize by defining $\theta_1 \equiv \beta\gamma$ (this is the parameter of interest since it reflects the effect of the covariate $Z$), $\theta_2 \equiv \lambda^\beta$, and $\theta_3 \equiv \beta$. This yields
$$ \lambda_\theta(t\,|\,z) = \theta_3\theta_2\exp(\theta_1 z)\,t^{\theta_3 - 1}. $$
You may assume that $a(z) \equiv (\partial/\partial\eta)\log g_\eta(z)$ exists and $E\{a^2(Z)\} < \infty$. Thus $Z$ is a covariate or predictor variable, $\theta_1$ is a regression parameter which affects the intensity of the (conditionally) Weibull variable $T$, and $\theta = (\theta_1, \theta_2, \theta_3, \theta_4)$ where $\theta_4 \equiv \eta$.

(a) Derive the joint density $p_\theta(t,z)$ of $(T,Z)$ for the re-parametrized model.
(b) Find the information matrix for $\theta$. What does the structure of this matrix say about the effect of $\eta = \theta_4$ being known or unknown on the estimation of $\theta_1, \theta_2, \theta_3$?
(c) Find the information and information bound for $\theta_1$ if the parameters $\theta_2$ and $\theta_3$ are known.
(d) What is the information bound for $\theta_1$ if just $\theta_3$ is known to be equal to 1?
(e) Find the efficient score function and the efficient influence function for estimation of $\theta_1$ when $\theta_3$ is known.
(f) Find the information $I_{11\cdot(2,3)}$ and information bound for $\theta_1$ if the parameters $\theta_2$ and $\theta_3$ are unknown. (Here both $\theta_2$ and $\theta_3$ are in the second block.)
(g) Find the efficient score function and the efficient influence function for estimation of $\theta_1$ when $\theta_2$ and $\theta_3$ are unknown.
(h) Specialize the calculations in (d)–(g) to the case when $Z \sim$ Bernoulli$(\theta_4)$ and compare the information bounds.

Solution: (a) Integrating $\lambda_\theta(t\,|\,z)$ with respect to $t$ gives $\Lambda_\theta(t\,|\,z) = \theta_2\exp(\theta_1 z)t^{\theta_3}$, and hence the conditional survival function $1 - F_\theta(t\,|\,z)$ is given by
$$ 1 - F_\theta(t\,|\,z) = \exp(-\Lambda_\theta(t\,|\,z)) = \exp(-\theta_2\exp(\theta_1 z)t^{\theta_3}). \tag{2} $$
It follows that
$$ f_\theta(t\,|\,z) = \theta_2\theta_3 e^{\theta_1 z}t^{\theta_3-1}\exp(-\theta_2 e^{\theta_1 z}t^{\theta_3}), $$
and hence that
$$ p_\theta(y,z) = f_\theta(y\,|\,z)\,g_\eta(z) = \theta_2\theta_3 e^{\theta_1 z}y^{\theta_3-1}\exp(-\theta_2 e^{\theta_1 z}y^{\theta_3})\,g_{\theta_4}(z). $$

(b) We first calculate the scores for $\theta$. Note that the random variable $W \equiv \theta_2\exp(\theta_1 Z)Y^{\theta_3}$ has, conditionally on $Z$, a standard Exponential(1) distribution:
$$ P_\theta(W > w \mid Z) = P_\theta\big(\theta_2\exp(\theta_1 Z)Y^{\theta_3} > w \mid Z\big) = e^{-w} $$
by (2). We calculate
$$ l(\theta \mid Y, Z) = \log p_\theta(Y,Z) = \log\theta_2 + \log\theta_3 + \theta_1 Z + (\theta_3 - 1)\log Y - \theta_2 e^{\theta_1 Z}Y^{\theta_3} + \log g_{\theta_4}(Z), $$
so that, using $\log Y = \theta_3^{-1}\{\log W - \log(\theta_2 e^{\theta_1 Z})\}$,
\begin{align*}
\dot l_1(Y,Z) &= Z - Z\theta_2 e^{\theta_1 Z}Y^{\theta_3} = Z(1-W),\\
\dot l_2(Y,Z) &= \frac{1}{\theta_2} - e^{\theta_1 Z}Y^{\theta_3} = \frac{1}{\theta_2}(1-W),\\
\dot l_3(Y,Z) &= \frac{1}{\theta_3} + \log Y - \theta_2 e^{\theta_1 Z}Y^{\theta_3}\log Y = \frac{1}{\theta_3} + (1-W)\log Y\\
&= \frac{1}{\theta_3}\Big\{[1 - (W-1)\log W] + (W-1)\log(\theta_2 e^{\theta_1 Z})\Big\},\\
\dot l_4(Y,Z) &= a(Z) = a(Z,\eta).
\end{align*}
Moreover,
\begin{align*}
\ddot l_{13}(Y,Z) &= -Z\theta_2 e^{\theta_1 Z}Y^{\theta_3}\log Y = -\frac{Z}{\theta_3}W\{\log W - \log(\theta_2 e^{\theta_1 Z})\} = -\frac{Z}{\theta_3}W\{\log W - \log\theta_2 - \theta_1 Z\},\\
\ddot l_{23}(Y,Z) &= -e^{\theta_1 Z}Y^{\theta_3}\log Y = -\frac{1}{\theta_2\theta_3}W\{\log W - \log\theta_2 - \theta_1 Z\},\\
\ddot l_{33}(Y,Z) &= -\frac{1}{\theta_3^2}\Big\{1 + W[\log W - \log(\theta_2 e^{\theta_1 Z})]^2\Big\}.
\end{align*}
Thus we calculate easily:
\begin{align*}
I_{11}(\theta) &= E_\theta(\dot l_1^2) = E\{Z^2 E[(1-W)^2\mid Z]\} = E(Z^2),\\
I_{22}(\theta) &= E_\theta(\dot l_2^2) = E\{\theta_2^{-2}E[(1-W)^2\mid Z]\} = \theta_2^{-2},\\
I_{33}(\theta) &= \theta_3^{-2}\Big\{1 + E[W(\log W)^2] - 2E(W\log W)\{\log\theta_2 + \theta_1 E(Z)\} + E(\log\theta_2 + \theta_1 Z)^2\Big\}\\
&= \theta_3^{-2}\Big\{1 + B - 2A\{\log\theta_2 + \theta_1 E(Z)\} + E(\log\theta_2 + \theta_1 Z)^2\Big\},\\
I_{12}(\theta) &= E_\theta(\dot l_1\dot l_2) = E\{Z\theta_2^{-1}E[(1-W)^2\mid Z]\} = \theta_2^{-1}E(Z),\\
I_{13}(\theta) &= -E_\theta\{\ddot l_{13}(Y,Z)\} = \theta_3^{-1}\{E(Z)[A - \log\theta_2] - \theta_1 E(Z^2)\},\\
I_{23}(\theta) &= -E_\theta\{\ddot l_{23}(Y,Z)\} = (\theta_2\theta_3)^{-1}\{A - \log\theta_2 - \theta_1 E(Z)\},
\end{align*}
where
$$ A \equiv E\{W\log W\} = \int_0^\infty (w\log w)e^{-w}\,dw = 1 - \gamma, \qquad B \equiv E\{W(\log W)^2\} = \pi^2/6 + (1-\gamma)^2 - 1, $$
with $\gamma$ Euler's constant. Note that since $\dot l_4(y,z) = a(z)$ is just a function of $z$, while for $j = 1,2,3$ we can write $\dot l_j(Y,Z) = g_j(W,Z)$ with $E[g_j(W,Z)\mid Z] = 0$, it follows easily that for $j = 1,2,3$ we also have
$$ I_{j4}(\theta) = E_\theta\{\dot l_j(Y,Z)\dot l_4(Y,Z)\} = E\{g_j(W,Z)a(Z)\} = E\{a(Z)E[g_j(W,Z)\mid Z]\} = E\{a(Z)\cdot 0\} = 0. $$
Because of this orthogonality, the information bounds for $(\theta_1,\theta_2,\theta_3)$ are the same when $\theta_4 = \eta$ is unknown as when it is known.
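As a sanity check on the constants $A$ and $B$ above, and on the fact $E[(1-W)^2 \mid Z] = \mathrm{Var}(W) = 1$ used repeatedly, here is a small numeric verification (a sketch assuming NumPy and SciPy; the seed and sample size are arbitrary):

```python
import numpy as np
from scipy import integrate

euler_gamma = 0.5772156649015329

# A = E[W log W], B = E[W (log W)^2] for W ~ Exponential(1).
A, _ = integrate.quad(lambda w: w * np.log(w) * np.exp(-w), 0, np.inf)
B, _ = integrate.quad(lambda w: w * np.log(w) ** 2 * np.exp(-w), 0, np.inf)
print(A, 1 - euler_gamma)                            # both ~ 0.4228
print(B, np.pi**2 / 6 + (1 - euler_gamma)**2 - 1)    # both ~ 0.8237

# E[(1 - W)^2] = Var(W) = 1 for W ~ Exponential(1):
w = np.random.default_rng(3).exponential(size=10**6)
print(np.mean((1 - w) ** 2))                         # ~ 1.0
```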

(c) If $\theta_2$ and $\theta_3$ are known, then the information bound for estimation of $\theta_1$ is just $I_{11}^{-1}(\theta) = 1/E(Z^2)$. It follows from the computations in (b) that the information matrix for $\theta$ is of the following form:
$$ I(\theta) = \begin{pmatrix} E(Z^2) & \theta_2^{-1}E(Z) & \theta_3^{-1}C & 0\\ \theta_2^{-1}E(Z) & \theta_2^{-2} & (\theta_2\theta_3)^{-1}D & 0\\ \theta_3^{-1}C & (\theta_2\theta_3)^{-1}D & \theta_3^{-2}E & 0\\ 0 & 0 & 0 & Ea^2(Z)\end{pmatrix} $$
where
\begin{align*}
C &= E(Z)(A - \log\theta_2) - \theta_1 E(Z^2),\\
D &= A - \log\theta_2 - \theta_1 E(Z),\\
E &= 1 + B - 2A(\log\theta_2 + \theta_1 E(Z)) + E(\log\theta_2 + \theta_1 Z)^2.
\end{align*}

(d) If $\theta_3 = 1$ is known, then the information bound for $\theta_1$ is $I_{11\cdot 2}^{-1}$ where
$$ I_{11\cdot 2}(\theta) = I_{11} - I_{12}I_{22}^{-1}I_{21} = E(Z^2) - (E(Z)/\theta_2)^2\,\theta_2^2 = E(Z^2) - (EZ)^2 = \mathrm{Var}(Z). $$
Thus $I_{11\cdot 2}^{-1} = 1/\mathrm{Var}(Z)$.

(e) When $\theta_3$ is known, the efficient score function and the efficient influence function for estimation of $\theta_1$ are given by
$$ l_1^*(Y,Z) = \dot l_1 - I_{12}I_{22}^{-1}\dot l_2 = Z(1-W) - \theta_2^{-1}E(Z)\,\theta_2^2\,\theta_2^{-1}(1-W) = (Z - E(Z))(1-W) $$
and
$$ \tilde l_1(Y,Z) = I_{11\cdot 2}^{-1}\,l_1^*(Y,Z) = \frac{1}{\mathrm{Var}(Z)}(Z - E(Z))(1-W). $$

(f) When both the parameters $\theta_2$ and $\theta_3$ are unknown, the information $I_{11\cdot(2,3)}$ is given by
$$ I_{11\cdot(2,3)} = I_{11} - I_{1(2,3)}I_{(2,3)(2,3)}^{-1}I_{(2,3)1} \tag{3} $$
where the second block contains both $\theta_2$ and $\theta_3$,
$$ I_{1(2,3)} = \big(\theta_2^{-1}E(Z),\ \theta_3^{-1}C\big), \qquad I_{(2,3)(2,3)} = \begin{pmatrix}\theta_2^{-2} & (\theta_2\theta_3)^{-1}D\\ (\theta_2\theta_3)^{-1}D & \theta_3^{-2}E\end{pmatrix}. $$
Thus the second term in (3) is
$$ \big\{[E(Z)]^2E - 2E(Z)CD + C^2\big\}\big/(E - D^2). \tag{4} $$
Now the denominator is
\begin{align*}
E - D^2 &= 1 + B - 2A(\log\theta_2 + \theta_1E(Z)) + E(\log\theta_2+\theta_1Z)^2\\
&\quad - \big[A^2 - 2A(\log\theta_2 + \theta_1E(Z)) + (\log\theta_2+\theta_1E(Z))^2\big]\\
&= 1 + B - A^2 + \mathrm{Var}[\log\theta_2 + \theta_1 Z] = \pi^2/6 + \theta_1^2\mathrm{Var}(Z),
\end{align*}
and, upon noting that
$$ C - E(Z)D = E(Z)(A-\log\theta_2) - \theta_1E(Z^2) - E(Z)\big\{(A-\log\theta_2) - \theta_1E(Z)\big\} = -\theta_1\mathrm{Var}(Z), $$
it follows that the numerator of (4) is
\begin{align*}
C^2 - 2E(Z)CD + [E(Z)]^2E &= C^2 - 2E(Z)CD + [E(Z)]^2D^2 + [E(Z)]^2(E - D^2)\\
&= (C - E(Z)D)^2 + [E(Z)]^2\{\pi^2/6 + \theta_1^2\mathrm{Var}(Z)\}\\
&= \theta_1^2[\mathrm{Var}(Z)]^2 + [E(Z)]^2\{\pi^2/6 + \theta_1^2\mathrm{Var}(Z)\}.
\end{align*}
It follows that the information for $\theta_1$ when $\theta_2$ and $\theta_3$ are unknown is equal to
\begin{align*}
I_{11\cdot(2,3)} &= E(Z^2) - [E(Z)]^2 - \frac{\theta_1^2[\mathrm{Var}(Z)]^2}{\pi^2/6 + \theta_1^2\mathrm{Var}(Z)}\\
&= \frac{\pi^2/6}{\pi^2/6 + \theta_1^2\mathrm{Var}(Z)}\,\mathrm{Var}(Z) \le \mathrm{Var}(Z) \le E(Z^2),
\end{align*}
with equality in the first inequality if and only if $\theta_1 = 0$. Note that the information decreases as $\theta_1^2$ increases, and it converges to $\pi^2/(6\theta_1^2)$ as $\mathrm{Var}(Z) \to \infty$.
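The closed form in (f) can be double-checked by building $I(\theta)$ numerically for a discrete covariate and taking the Schur complement of the $(\theta_2,\theta_3)$ block directly. A small sketch (NumPy assumed; the test point $\theta = (0.7, 1.3, 2.0)$ and the Bernoulli$(0.6)$ covariate are arbitrary choices):

```python
import numpy as np

# Arbitrary test point and a two-point covariate distribution.
th1, th2, th3 = 0.7, 1.3, 2.0
zvals, probs = np.array([0.0, 1.0]), np.array([0.4, 0.6])  # Z ~ Bernoulli(0.6)
EZ, EZ2 = probs @ zvals, probs @ zvals**2
VZ = EZ2 - EZ**2

g = 0.5772156649015329                    # Euler's constant
A = 1 - g
B = np.pi**2 / 6 + (1 - g)**2 - 1
lt = np.log(th2)
C = EZ * (A - lt) - th1 * EZ2
D = A - lt - th1 * EZ
E = 1 + B - 2 * A * (lt + th1 * EZ) + probs @ (lt + th1 * zvals)**2

I = np.array([[EZ2,      EZ / th2,        C / th3],
              [EZ / th2, th2**-2,         D / (th2 * th3)],
              [C / th3,  D / (th2 * th3), E / th3**2]])

# Schur complement of the (theta_2, theta_3) block vs the closed form.
schur = I[0, 0] - I[0, 1:] @ np.linalg.solve(I[1:, 1:], I[1:, 0])
closed = (np.pi**2 / 6) / (np.pi**2 / 6 + th1**2 * VZ) * VZ
print(schur, closed)                      # agree to rounding
```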

(g) When $\theta_2$ and $\theta_3$ are unknown the efficient score function for $\theta_1$ is, with the second block containing both $\theta_2$ and $\theta_3$,
\begin{align*}
l_1^* &= \dot l_1 - I_{1(2,3)}I_{(2,3)(2,3)}^{-1}(\dot l_2, \dot l_3)^T\\
&= \dot l_1 - \big(\theta_2(E(Z)E - CD),\ \theta_3(C - DE(Z))\big)\,(\dot l_2,\dot l_3)^T\big/(E - D^2)\\
&= Z(1-W) - \frac{E(Z)E - CD}{E - D^2}(1-W)\\
&\quad + \frac{\theta_1\mathrm{Var}(Z)}{\pi^2/6 + \theta_1^2\mathrm{Var}(Z)}\Big\{[1 - (W-1)\log W] + (W-1)\log(\theta_2 e^{\theta_1 Z})\Big\}\\
&= (1-W)\left\{Z - \frac{E(Z)E - CD + \theta_1\mathrm{Var}(Z)\log(\theta_2 e^{\theta_1 Z})}{\pi^2/6 + \theta_1^2\mathrm{Var}(Z)}\right\} + \frac{\theta_1\mathrm{Var}(Z)}{\pi^2/6 + \theta_1^2\mathrm{Var}(Z)}\big\{1 - (W-1)\log W\big\},
\end{align*}
using $C - DE(Z) = -\theta_1\mathrm{Var}(Z)$ and $E - D^2 = \pi^2/6 + \theta_1^2\mathrm{Var}(Z)$ from (f); the efficient influence function is then $\tilde l_1 = I_{11\cdot(2,3)}^{-1}\,l_1^*$.

(h) When $Z \sim$ Bernoulli$(\eta)$, then
\begin{align*}
I_{11} &= E(Z^2) = \eta = \theta_4,\\
I_{11\cdot 2} &= \mathrm{Var}(Z) = \eta(1-\eta) = \theta_4(1-\theta_4),\\
I_{11\cdot(2,3)} &= \frac{\pi^2/6}{\pi^2/6 + \theta_1^2\mathrm{Var}(Z)}\,\mathrm{Var}(Z) = \frac{(\pi^2/6)\,\eta(1-\eta)}{\pi^2/6 + \theta_1^2\,\eta(1-\eta)}.
\end{align*}
The corresponding information bounds are given by the reciprocals of these quantities. See the following figures for comparisons of the information and information bounds.

Figure 1: Plots of $I_{11}$, $I_{11\cdot 2}$, and $I_{11\cdot(2,3)}$ as a function of $\eta = \theta_4$, and for $\theta_1 = 0.5, 1.0, \ldots$

Figure 2: Plots of $I_{11}^{-1}$, $I_{11\cdot 2}^{-1}$, and $I_{11\cdot(2,3)}^{-1}$ as a function of $\eta = \theta_4$, and for $\theta_1 = 0.5, 1.0, \ldots$
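The quantities plotted in Figures 1 and 2 are straightforward to tabulate from the formulas in (h). A minimal sketch (NumPy assumed; the values of $\theta_1$ and $\eta$ below are arbitrary illustration choices):

```python
import numpy as np

def informations(eta, th1):
    """I_11, I_11.2, I_11.(2,3) for Z ~ Bernoulli(eta), from part (h)."""
    v = eta * (1 - eta)
    k = np.pi**2 / 6
    return eta, v, k * v / (k + th1**2 * v)

eta = 0.3                                  # arbitrary illustration point
for th1 in (0.5, 1.0, 2.0):
    i11, i112, i1123 = informations(eta, th1)
    print(th1, (i11, i112, i1123), (1 / i11, 1 / i112, 1 / i1123))
# The bounds 1/I increase (information decreases) as more parameters are
# treated as unknown, and the (2,3)-bound grows with theta_1^2.
```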
