22S:194 Statistical Inference II Homework Assignments. Luke Tierney


22S:194 Statistical Inference II Homework Assignments — Luke Tierney, Spring 2003

Assignment 1

Problem 6.3, Problem 6.6: due Friday, January 31, 2003.
Problem 6.9, Problem 6.10: due Friday, January 31, 2003.
Problem 6.14, Problem 6.20: due Friday, January 31, 2003.

Solutions

6.3 The joint density for the data is

f(x_1, ..., x_n | µ, σ) = σ^{-n} exp{-Σ(x_i - µ)/σ} 1_(µ,∞)(x_(1))
                        = σ^{-n} exp{-(n/σ)x̄ + (n/σ)µ} 1_(µ,∞)(x_(1))

So, by the factorization criterion, (X̄, X_(1)) is sufficient for (µ, σ).

6.6 The density of a single observation is

f(x | α, β) = (1/(Γ(α)β^α)) x^{α-1} e^{-x/β} = (1/(Γ(α)β^α)) exp{(α-1) log x - (1/β)x}

This is a two-parameter exponential family, so (T_1, T_2) = (Σ log X_i, Σ X_i) is sufficient for (α, β). Since Π X_i is a one-to-one function of Σ log X_i, the pair (Π X_i, Σ X_i) is also sufficient for (α, β).

6.9 a. Done in class already.

b. Θ = R, X = R^n, and

f(x|θ) = e^{-Σx_i + nθ} 1_(θ,∞)(x_(1))

Suppose x_(1) = y_(1). Then

f(x|θ) = exp{Σ y_i - Σ x_i} f(y|θ)

for all θ. So k(x, y) = exp{Σ y_i - Σ x_i} works. Suppose x_(1) ≠ y_(1). Then for some θ one of f(x|θ), f(y|θ) is zero and the other is not. So no k(x, y) exists. So T(X) = X_(1) is minimal sufficient.

c. Θ = R, X = R^n. The support does not depend on θ so we can work with ratios of densities. The ratio of densities for two samples x and y is

f(x|θ)/f(y|θ) = e^{-Σ(x_i-θ)} Π(1 + e^{-(y_i-θ)})² / [e^{-Σ(y_i-θ)} Π(1 + e^{-(x_i-θ)})²]
             = e^{-Σx_i + Σy_i} Π(1 + e^{-y_i} e^θ)² / Π(1 + e^{-x_i} e^θ)²

If the two samples contain identical values, i.e. if they have the same order statistics, then this ratio is constant in θ. If the ratio is constant in θ then the ratio of the two product terms is constant. These terms are both polynomials of degree 2n in e^θ. If two polynomials are equal on an open subset of the real line then they are equal on the entire real line. Hence they have the same roots. The roots are {-e^{x_i}} and {-e^{y_i}} (each of multiplicity 2). If those sets are equal then the sets of sample values {x_i} and {y_i} are equal, i.e. the two samples must have the same order statistics. So the order statistics (X_(1), ..., X_(n)) are minimal sufficient.

d. Same idea:

f(x|θ)/f(y|θ) = Π(1 + (y_i - θ)²) / Π(1 + (x_i - θ)²)

If the two samples have the same order statistics then the ratio is constant. If the ratio is constant for all real θ then the two polynomials in θ are equal on the complex plane, and so the roots must be equal. The roots are the complex numbers θ = x_j ± i and θ = y_j ± i with i = √-1. So again the order statistics are minimal sufficient.

e. Θ = R, X = R^n. The support does not depend on θ so we can work with ratios of densities. The ratio of densities for two samples x and y is

f(x|θ)/f(y|θ) = exp{Σ|y_i - θ| - Σ|x_i - θ|}

If the order statistics are the same then the ratio is constant. Suppose the order statistics differ. Then there is some open interval I containing no x_i and no y_i such that #{x_i > I} ≠ #{y_i > I}. The slopes on I of Σ|x_i - θ| and Σ|y_i - θ| as functions of θ are

n - 2 #{x_i > I} and n - 2 #{y_i > I}

So Σ|y_i - θ| - Σ|x_i - θ| has slope

2(#{x_i > I} - #{y_i > I}) ≠ 0

on I, and so the ratio is not constant on I. So again the order statistic is minimal sufficient.

6.10 To show that the minimal sufficient statistic is not complete we need to find a function g that is not identically zero but has expected value zero for all θ. Now

E[X_(1)] = θ + 1/(n+1), E[X_(n)] = θ + n/(n+1)

So g(X_(1), X_(n)) = X_(n) - X_(1) - (n-1)/(n+1) has expected value zero for all θ but is not identically zero for n > 1.

6.14 X_1, ..., X_n are i.i.d. from f(x - θ). This means Z_i = X_i - θ are i.i.d. from f(z). Now

X̄ = Z̄ + θ and X̃ = Z̃ + θ

where X̃ and Z̃ are the sample medians. So X̄ - X̃ = Z̄ - Z̃ is ancillary.

6.20 a. The joint density of the data is

f(x_1, ..., x_n | θ) = 2^n (Π x_i) θ^{-2n} 1_(0,θ)(x_(n))

so T = X_(n) is sufficient (and minimal sufficient). X_(n) has density

f_T(t|θ) = 2n t^{2n-1} θ^{-2n} 1_(0,θ)(t)

Thus

0 = ∫_0^θ g(t) 2n t^{2n-1} θ^{-2n} dt for all θ > 0

means

0 = ∫_0^θ g(t) t^{2n-1} dt for all θ > 0,

and this in turn implies

g(θ) θ^{2n-1} = 0 for almost all θ > 0,

so g(t) = 0 for all t > 0. So T = X_(n) is complete.

b. Exponential family, T(X) = Σ log(1 + X_i), θ > 0, and hence {w(θ) : θ ∈ Θ} = (1, ∞), which is an open interval.

c. Exponential family, T(X) = Σ X_i, {w(θ) : θ ∈ Θ} = {log θ : θ > 1} = (0, ∞), which is an open interval.

d. Exponential family, T(X) = Σ e^{-X_i}, {w(θ) : θ ∈ Θ} = {-e^θ : θ ∈ R} = (-∞, 0), which is an open interval.

e. Exponential family, T(X) = Σ X_i, {w(θ) : θ ∈ Θ} = {log θ - log(1 - θ) : 0 ≤ θ ≤ 1} = [-∞, ∞], which contains an open interval.
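The non-completeness argument in 6.10 can be checked numerically. The sketch below (an illustration, not part of the original solution; the simulation settings n = 5, θ = 3.7 are arbitrary) verifies by Monte Carlo that for the Uniform(θ, θ+1) model the range X_(n) - X_(1) has mean (n-1)/(n+1) regardless of θ, so g has expectation zero without being identically zero.

```python
import random

# Monte Carlo check for 6.10: for X_1,...,X_n i.i.d. Uniform(theta, theta+1),
# E[X_(n) - X_(1)] = (n-1)/(n+1), free of theta.
random.seed(1)
n, theta, reps = 5, 3.7, 200_000
total = 0.0
for _ in range(reps):
    xs = [theta + random.random() for _ in range(n)]
    total += max(xs) - min(xs)
mc_mean = total / reps
print(mc_mean, (n - 1) / (n + 1))  # the two values should agree to ~3 decimals
```

Changing `theta` leaves the simulated mean unchanged, which is exactly why the pair (X_(1), X_(n)) cannot be complete.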

Assignment 2

Problem 7.6, Problem 7.11: due Friday, February 7, 2003.
Problem 7.13, Problem 7.14: due Friday, February 7, 2003.

Solutions

7.6 The joint PDF of the data can be written as

f(x|θ) = θ^n (Π x_i^{-2}) 1_[θ,∞)(x_(1))

a. X_(1) is sufficient.

b. The likelihood increases in θ up to x_(1) and then is zero. So the MLE is θ̂ = X_(1).

c. The expected value of a single observation is

E_θ[X] = ∫_θ^∞ x θ x^{-2} dx = θ ∫_θ^∞ x^{-1} dx = ∞

So the (usual) method of moments estimator does not exist.

7.11 a. The likelihood and log likelihood are

L(θ|x) = θ^n (Π x_i)^{θ-1}
log L(θ|x) = n log θ + (θ - 1) Σ log x_i

The derivative of the log likelihood and its unique root are

(d/dθ) log L(θ|x) = n/θ + Σ log x_i
θ̂ = -n / Σ log x_i

Since log L(θ|x) → -∞ as θ → 0 or θ → ∞ and the likelihood is differentiable on the parameter space, this root is a global maximum.

Now -log X_i ~ Exponential(1/θ) = Gamma(1, 1/θ). So -Σ log X_i ~ Gamma(n, 1/θ). So

E[θ̂] = E[n / (-Σ log X_i)] = ∫_0^∞ (n/x) (θ^n/Γ(n)) x^{n-1} e^{-θx} dx = nθ Γ(n-1)/Γ(n) = (n/(n-1)) θ

and

E[θ̂²] = n²θ² Γ(n-2)/Γ(n) = n²θ²/((n-1)(n-2))

So

Var(θ̂) = n²θ² [1/((n-1)(n-2)) - 1/(n-1)²] → 0 as n → ∞.

b. The mean of a single observation is

E[X] = ∫_0^1 x θ x^{θ-1} dx = θ/(θ+1)

So X̄ = θ̃/(θ̃ + 1) is the method of moments equation; solving

θ̃ X̄ + X̄ = θ̃, i.e. θ̃(X̄ - 1) = -X̄,

gives

θ̃ = X̄/(1 - X̄)

We could use the delta method to find a normal approximation to the distribution of θ̃. The variance of the approximate distribution is larger than the variance of the MLE.

7.13 The likelihood is

L(θ|x) = 2^{-n} exp{-Σ|x_i - θ|}

We know that the sample median X̃ minimizes Σ|x_i - θ|, so θ̂ = X̃. The minimizer is unique for n odd. For n even any value between the two middle order statistics is a minimizer.
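A quick simulation makes the MLE in 7.11a concrete (an illustration only; the settings θ = 2.5, n = 50000 are arbitrary). If U ~ Uniform(0, 1) then U^{1/θ} has density θx^{θ-1} on (0, 1), so we can sample from the model and evaluate θ̂ = -n/Σ log x_i directly.

```python
import math
import random

# Simulation check of the MLE theta-hat = -n / sum(log x_i) for the density
# f(x|theta) = theta * x**(theta - 1) on (0, 1).
random.seed(2)
theta, n = 2.5, 50_000
xs = [random.random() ** (1 / theta) for _ in range(n)]
theta_hat = -n / sum(math.log(x) for x in xs)
print(theta_hat)  # should be close to 2.5
```

The small upward bias nθ/(n-1) derived above is negligible at this sample size.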

7.14 We need the joint density of W, Z:

P(W = 1, Z ∈ [z, z+h)) = P(X ∈ [z, z+h), Y ≥ z+h) + o(h)
                       = h (1/λ) e^{-z/λ} e^{-z/µ} + o(h)
                       = h (1/λ) e^{-z(1/λ + 1/µ)} + o(h)

and, similarly,

P(W = 0, Z ∈ [z, z+h)) = h (1/µ) e^{-z(1/λ + 1/µ)} + o(h)

So

f(w, z) = lim_{h→0} (1/h) P(W = w, Z ∈ [z, z+h)) = λ^{-w} µ^{-(1-w)} e^{-z(1/λ + 1/µ)}

and therefore

f(w_1, ..., w_n, z_1, ..., z_n | λ, µ) = λ^{-Σw_i} µ^{-(n - Σw_i)} e^{-Σz_i (1/λ + 1/µ)}

Since this factors,

f(w_1, ..., w_n, z_1, ..., z_n | λ, µ) = [λ^{-Σw_i} e^{-Σz_i/λ}] [µ^{-(n - Σw_i)} e^{-Σz_i/µ}]

it is maximized by maximizing each term separately, which produces

λ̂ = Σz_i / Σw_i, µ̂ = Σz_i / (n - Σw_i)

In words, if the X_i represent failure times, then

λ̂ = (total time on test) / (number of observed failures)

Assignment 3

Problem 7.22, Problem 7.23: due Friday, February 14, 2003.
Problem 7.33: due Friday, February 14, 2003.
Problem 7.38, Problem 7.39: due Friday, February 14, 2003.

Solutions

7.22 We have

X̄ | θ ~ N(θ, σ²/n)
θ ~ N(µ, τ²)

a. The joint density of X̄, θ is

f(x̄, θ) = f(x̄|θ) f(θ) ∝ exp{-(n/(2σ²))(x̄ - θ)² - (1/(2τ²))(θ - µ)²}

This is a joint normal distribution. The means and variances are

E[θ] = µ
E[X̄] = E[θ] = µ
Var(θ) = τ²
Var(X̄) = Var(θ) + σ²/n = τ² + σ²/n

The covariance and correlation are

Cov(θ, X̄) = E[X̄θ] - µ² = E[θ²] - µ² = τ²
ρ = τ² / √(τ²(τ² + σ²/n)) = τ / √(τ² + σ²/n)

b. This means that the marginal distribution of X̄ is N(µ, τ² + σ²/n).

c. The posterior distribution of θ is

f(θ|x̄) ∝ exp{-(n/(2σ²))(x̄ - θ)² - (1/(2τ²))(θ - µ)²}
        ∝ exp{(n x̄/σ² + µ/τ²)θ - (1/2)(n/σ² + 1/τ²)θ²}

This is a normal distribution with mean and variance

Var(θ|X̄) = (n/σ² + 1/τ²)^{-1} = (σ²/n) τ² / (τ² + σ²/n)
E[θ|X̄] = Var(θ|X̄)(n X̄/σ² + µ/τ²) = [τ²/(τ² + σ²/n)] X̄ + [(σ²/n)/(τ² + σ²/n)] µ

7.23 We have S² | σ² ~ Gamma((n-1)/2, 2σ²/(n-1)) and

f(σ²) = (1/(Γ(α)β^α)) (σ²)^{-(α+1)} e^{-1/(βσ²)}

The posterior distribution of σ² | S² is therefore

f(σ²|s²) ∝ (σ²)^{-(n-1)/2} e^{-(n-1)s²/(2σ²)} (σ²)^{-(α+1)} e^{-1/(βσ²)}
         ∝ IG(α + (n-1)/2, [1/β + (n-1)s²/2]^{-1})

If Y ~ IG(a, b), then V = 1/Y ~ Gamma(a, b). So

E[Y] = E[1/V] = ∫_0^∞ (1/v)(1/(Γ(a)b^a)) v^{a-1} e^{-v/b} dv = (1/(bΓ(a))) Γ(a-1) = 1/(b(a-1))

So the posterior mean of σ² is

E[σ²|S²] = (1/β + (n-1)S²/2) / (α + (n-3)/2)

7.33 From the example in the text, the MSE of the Bayes estimator p̂_B = (Y + α)/(α + β + n) is

E[(p̂_B - p)²] = np(1-p)/(α + β + n)² + ((np + α)/(α + β + n) - p)²

With α = β = √(n/4) = √n/2 this becomes

[np(1-p) + (√n/2 - √n p)²] / (n + √n)²
= n[p(1-p) + (1/2 - p)²] / (n + √n)²
= n[p - p² + 1/4 - p + p²] / (n + √n)²
= n / (4(n + √n)²)

which is constant in p.

7.38 a. The population density is

f(x|θ) = θ x^{θ-1} = θ x^{-1} e^{θ log x}

and the score is

(∂/∂θ) Σ log f(x_i|θ) = n/θ + Σ log x_i = -n[(-1/n)Σ log x_i - 1/θ]

So T(X) = -(1/n) Σ log X_i attains the Cramér-Rao lower bound and is efficient for

τ(θ) = E_θ[-log X_1] = -∫_0^1 (log x) θ x^{θ-1} dx = ∫_0^∞ y θ e^{-θy} dy = 1/θ

b. The population density is

f(x|θ) = (log θ/(θ - 1)) θ^x = (log θ/(θ - 1)) e^{x log θ}, 0 < x < 1

So

Σ log f(x_i|θ) = n(log log θ - log(θ - 1)) + (Σ x_i) log θ

and

(∂/∂θ) Σ log f(x_i|θ) = n(1/(θ log θ) - 1/(θ - 1)) + Σ x_i/θ = (n/θ)[x̄ - (θ/(θ-1) - 1/log θ)]

So X̄ is efficient for τ(θ) = θ/(θ-1) - 1/log θ.

7.39 Done in class.

Assignment 4

Problem 7.44, Problem 7.48: due Friday, February 21, 2003.
Problem 7.62, Problem 7.63, Problem 7.64: due Friday, February 21, 2003.

Solutions

7.44 X_1, ..., X_n are i.i.d. N(θ, 1). W = X̄² - 1/n has

E[W] = E[X̄²] - 1/n = θ²

Since X̄ is sufficient and complete, W is the UMVUE of θ². The CRLB for estimating τ(θ) = θ² is

τ'(θ)²/I_n(θ) = (2θ)²/n = 4θ²/n

Now, writing X̄ = θ + Z/√n with Z standard normal,

E[X̄²] = θ² + 1/n
E[X̄⁴] = E[(θ + Z/√n)⁴]
       = θ⁴ + 4θ³ E[Z]/√n + 6θ² E[Z²]/n + 4θ E[Z³]/n^{3/2} + E[Z⁴]/n²
       = θ⁴ + 6θ²/n + 3/n²

So

Var(W) = Var(X̄²) = θ⁴ + 6θ²/n + 3/n² - (θ² + 1/n)² = 4θ²/n + 2/n² > 4θ²/n

so the UMVUE does not attain the CRLB.

7.48 a. The MLE p̂ = (1/n) Σ X_i has variance Var(p̂) = p(1-p)/n. The information is

I_n(p) = -E[(∂²/∂p²)(Σ X_i log p + (n - Σ X_i) log(1-p))]
       = E[Σ X_i/p² + (n - Σ X_i)/(1-p)²]
       = n/p + n/(1-p) = n/(p(1-p))

So the CRLB is p(1-p)/n, and the MLE attains it.

b. E[X₁X₂X₃X₄] = p⁴. Σ X_i is sufficient and complete. With W = X₁X₂X₃X₄,

E[W | Σ X_i = t] = P(X₁ = X₂ = X₃ = X₄ = 1 | Σ X_i = t)
= 0 for t < 4, and for t ≥ 4
= P(X₁ = X₂ = X₃ = X₄ = 1, Σ_{i=5}^n X_i = t - 4) / P(Σ X_i = t)
= p⁴ C(n-4, t-4) p^{t-4} (1-p)^{n-t} / [C(n, t) p^t (1-p)^{n-t}]
= C(n-4, t-4)/C(n, t)

So the UMVUE is (for n ≥ 4)

C(n-4, t-4)/C(n, t) = t(t-1)(t-2)(t-3) / (n(n-1)(n-2)(n-3))
= p̂(p̂ - 1/n)(p̂ - 2/n)(p̂ - 3/n) / ((1 - 1/n)(1 - 2/n)(1 - 3/n))

No unbiased estimator exists for n < 4.

7.62 a.

R(θ, aX̄ + b) = E_θ[(aX̄ + b - θ)²] = a² Var(X̄) + (aθ + b - θ)² = a² σ²/n + (b - (1-a)θ)²

b. For η = (σ²/n)/(τ² + σ²/n),

δ^π = E[θ|X̄] = (1 - η)X̄ + ηµ

So

R(θ, δ^π) = (1-η)² σ²/n + (ηµ - ηθ)² = (1-η)² σ²/n + η²(µ - θ)² = η(1-η)τ² + η²(µ - θ)²

c.

B(π, δ^π) = E[E[(θ - δ^π)² | X̄]] = E[E[(θ - E[θ|X̄])² | X̄]] = E[Var(θ|X̄)] = (σ²/n)τ² / (σ²/n + τ²) = ητ²

7.63 From the previous problem, when the prior mean is zero (and σ²/n = 1) the risk of the Bayes rule is

R(θ, δ^π) = (τ⁴ + θ²)/(1 + τ²)²

So for τ = 1

R(θ, δ^π) = (1 + θ²)/4

and for τ = 10

R(θ, δ^π) = (10⁴ + θ²)/101²

With a smaller τ the risk is lower near the prior mean and higher far from the prior mean.

7.64 For any a = (a_1, ..., a_n),

E[Σ L(θ_i, a_i) | X = x] = Σ E[L(θ_i, a_i) | X = x]

The independence assumptions imply that (θ_i, X_i) is independent of {X_j : j ≠ i} and therefore

E[L(θ_i, a_i) | X = x] = E[L(θ_i, a_i) | X_i = x_i]

for each i. Since δ_i^π is a Bayes rule for estimating θ_i with loss L(θ_i, a_i) we have

E[L(θ_i, a_i) | X_i = x_i] ≥ E[L(θ_i, δ_i^π(X_i)) | X_i = x_i] = E[L(θ_i, δ_i^π(X_i)) | X = x]

with the final equality again following from the independence assumptions. So

Σ E[L(θ_i, a_i) | X_i = x_i] ≥ Σ E[L(θ_i, δ_i^π(X_i)) | X = x]

and therefore

E[Σ L(θ_i, δ_i^π(X_i)) | X = x] ≤ E[Σ L(θ_i, a_i) | X = x]

for all a, which implies that δ^π(X) = (δ_1^π(X_1), ..., δ_n^π(X_n)) is a Bayes rule for estimating θ = (θ_1, ..., θ_n) with loss Σ L(θ_i, a_i) and priors π_i(θ_i).

Assignment 5

Problem 8.5, Problem 8.6: due Friday, February 28, 2003.

Solutions

8.5 a. The likelihood can be written as

L(θ, ν) = θ^n ν^{nθ} (Π x_i^{-(θ+1)}) 1_[ν,∞)(x_(1))

For fixed θ, this increases in ν for ν ≤ x_(1) and is then zero. So ν̂ = x_(1), and

L*(θ) = max_ν L(θ, ν) = θ^n (x_(1)^n / Π x_i)^θ (Π x_i)^{-1} = θ^n (Π x_i)^{-1} e^{-θT}

where T = Σ log(x_i/x_(1)) = log(Π x_i / x_(1)^n). So θ̂ = n/T.

b. The likelihood ratio criterion is

Λ(x) = L*(1)/L*(θ̂) = e^{-T} / [(n/T)^n e^{-n}] = const · T^n e^{-T}

This is a unimodal function of T; it increases from zero to a maximum at T = n and then decreases back to zero. Therefore for any c > 0

R = {x : Λ(x) < c} = {x : T < c₁ or T > c₂}

c. The conditional density of X₂, ..., Xₙ, given X₁ = x₁ and X_i ≥ X₁ for i = 2, ..., n, is

f(x₂, ..., xₙ | x₁, x_i ≥ x₁) = Π_{i=2}^n f(x_i)/P(X > x₁)

and

P(X_i > y) = ∫_y^∞ θ ν^θ x^{-(θ+1)} dx = ν^θ y^{-θ}

So

f(x₂, ..., xₙ | x₁, x_i ≥ x₁) = θ^{n-1} x₁^{θ(n-1)} Π_{i=2}^n x_i^{-(θ+1)} 1{x_i > x₁}

Let Y_i = X_i/x₁, i = 2, ..., n. Then

f_Y(y₂, ..., yₙ | x₁, x_i > x₁) = x₁^{n-1} f(y₂x₁, ..., yₙx₁ | x₁, x_i > x₁) = θ^{n-1} Π y_i^{-(θ+1)}

i.e. Y₂, ..., Yₙ are i.i.d. with density θ/y^{θ+1} for y > 1, and T = log Y₂ + ... + log Yₙ. If Z = log Y, then

f_Z(z) = f_Y(e^z) |dy/dz| = θ e^{-(θ+1)z} e^z = θ e^{-θz}

and thus T | {X₁ = x₁, X_i > X₁} ~ Gamma(n-1, 1/θ). By symmetry, this means that T | X_(1) ~ Gamma(n-1, 1/θ), which is independent of X_(1), so T has this distribution unconditionally as well. For θ = 1,

T ~ Gamma(n-1, 1) and 2T ~ Gamma(n-1, 2) = χ²_{2(n-1)}

8.6 a. The likelihood ratio criterion is

Λ = [(ΣX_i + ΣY_i)/(n+m)]^{-(n+m)} e^{-(n+m)} / {[(ΣX_i)/n]^{-n} e^{-n} [(ΣY_i)/m]^{-m} e^{-m}}
  = [(n+m)^{n+m}/(n^n m^m)] (ΣX_i)^n (ΣY_i)^m / (ΣX_i + ΣY_i)^{n+m}

The test rejects if this is small.

b. The likelihood ratio criterion is of the form Λ = const · T^n (1-T)^m with T = ΣX_i/(ΣX_i + ΣY_i). So the test rejects if T is too small or too large.

c. Under H₀, T ~ Beta(n, m).

Assignment 6

Problem 8.14, Problem 8.17: due Friday, March 7, 2003.
Problem 8.15, Problem 8.25: due Friday, March 7, 2003.
Problem 8.28, Problem 8.33: due Friday, March 7, 2003.

Solutions

8.14 Use R = {x : x̄ > c}. α = 0.01 means

0.01 = P(X̄ > c | p = 0.49) ≈ P(Z > (c - 0.49)/√(0.49 · 0.51/n))

So

c = 0.49 + z_{0.01} √(0.49 · 0.51/n)

Power 0.99 at p = 0.51 implies

0.99 = P(X̄ > c | p = 0.51) ≈ P(Z > (c - 0.51)/√(0.51 · 0.49/n))

So

c = 0.51 - z_{0.01} √(0.51 · 0.49/n)

Equating the two expressions gives c = 0.5 and

√n = 2 z_{0.01} √(0.49 · 0.51)/0.02 = (2.326)(0.49990)(100) ≈ 116.3, so n ≈ 13,524.
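The two equations in 8.14 can be solved mechanically. A minimal sketch (illustration only, using the standard library's normal distribution):

```python
from statistics import NormalDist

# Solve  c = 0.49 + z*sqrt(0.49*0.51/n)  and  c = 0.51 - z*sqrt(0.51*0.49/n)
# simultaneously, where z is the upper 0.01 normal quantile.
z = NormalDist().inv_cdf(0.99)       # z_{0.01} ~ 2.326
pq = (0.49 * 0.51) ** 0.5
n = (2 * z * pq / 0.02) ** 2         # equate the two expressions for c
c = 0.49 + z * pq / n ** 0.5
print(round(n), c)                   # n is about 13,500 and c = 0.5
```

The required sample size is large because the two hypotheses p = 0.49 and p = 0.51 are very close together.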

8.17 For X₁, ..., Xₙ i.i.d. Beta(µ, 1),

L(µ|x) = µ^n Π x_i^{µ-1} = µ^n e^{µ Σ log x_i} e^{-Σ log x_i}

So

µ̂ = -n / Σ log x_i

and, as in 8.6, the likelihood ratio criterion for two samples reduces to

Λ(x, y) = [(n+m)^{n+m}/(n^n m^m)] T^n (1-T)^m, with T = Σ log x_i / (Σ log x_i + Σ log y_i)

So

{Λ < c} = {T < c₁ or T > c₂}

for suitable c₁, c₂. Under H₀, the -log X_i and -log Y_j are i.i.d. exponential, so T ~ Beta(n, m). To find c₁ and c₂ either cheat and use equal tail probabilities (the right thing to do by symmetry if n = m), or solve numerically.

8.15

L(σ₁²|x)/L(σ₀²|x) = (σ₀²/σ₁²)^{n/2} exp{-(1/2)(1/σ₁² - 1/σ₀²) Σ x_i²}

If σ₁ > σ₀ then this is increasing in Σ x_i². So L(σ₁²)/L(σ₀²) > k for some k if and only if Σ X_i² > c for some c. Under H₀ : σ = σ₀, Σ X_i²/σ₀² ~ χ²_n, so

c = σ₀² χ²_{n,α}

8.25 a. For θ₂ > θ₁,

g(x|θ₂)/g(x|θ₁) = exp{-(x - θ₂)²/(2σ²) + (x - θ₁)²/(2σ²)} = const · exp{x(θ₂ - θ₁)/σ²}

This is increasing in x since θ₂ - θ₁ > 0.

b. For θ₂ > θ₁,

g(x|θ₂)/g(x|θ₁) = (θ₂^x e^{-θ₂}/x!)/(θ₁^x e^{-θ₁}/x!) = const · (θ₂/θ₁)^x

which is increasing in x since θ₂/θ₁ > 1.

c. For θ₂ > θ₁,

g(x|θ₂)/g(x|θ₁) = [θ₂^x (1-θ₂)^{n-x}]/[θ₁^x (1-θ₁)^{n-x}] = const · [ (θ₂/(1-θ₂)) / (θ₁/(1-θ₁)) ]^x

This is increasing in x since θ/(1-θ) is increasing in θ.

8.28 a. Here

f(x|θ₂)/f(x|θ₁) = e^{θ₁-θ₂} (1 + e^{x-θ₁})²/(1 + e^{x-θ₂})² = e^{θ₂-θ₁} [(e^{θ₁} + e^x)/(e^{θ₂} + e^x)]²

Let g(y) = (A + y)/(B + y) with A = e^{θ₁}, B = e^{θ₂}, y = e^x. Then

g'(y) = [(B + y) - (A + y)]/(B + y)² = (B - A)/(B + y)²

so g'(y) ≥ 0 if B ≥ A. So we have MLR in x.

b. Since the ratio is increasing in x, the most powerful test is of the form R = {x > c}. Now

1 - F_θ(x) = 1/(1 + e^{x-θ})

So for H₀ : θ = 0 and α = 0.2 we need 1/(1 + e^c) = 0.2, so 1 + e^c = 5, e^c = 4, and c = log 4. The power is

β(1) = 1/(1 + e^{c-1}) = 1/(1 + 4/e) ≈ 0.405

Since we have MLR, the test is UMP. This is true for any θ₀. This only works for n = 1; otherwise there is no one-dimensional sufficient statistic.

8.33 a. The joint density is

f(x|θ) = 1_(θ,∞)(Y₁) 1_(-∞,θ+1)(Yₙ)

where Y₁ = min X_i and Yₙ = max X_i.

b.

P(Y₁ > k or Yₙ > 1 | θ = 0) = P(Y₁ > k | θ = 0) = (1 - k)^n = α

So k = 1 - α^{1/n}.

c.

β(θ) = P(Yₙ > 1 or Y₁ > k | θ) = P(Yₙ > 1 | θ) + P(Y₁ > k and Yₙ ≤ 1 | θ)
     = 1 for θ > 1
     = 1 - (1 - θ)^n + (1 - max{k, θ})^n for 0 ≤ θ ≤ 1

Fix θ' > 0. Suppose k ≤ θ'. Then β(θ') = 1. Suppose k > θ'. Take k = 1 in the NP lemma. Then

f(x|θ') < f(x|θ₀) implies 0 < Y₁ ≤ θ' < k, so x ∉ R
f(x|θ') > f(x|θ₀) implies 1 ≤ Yₙ < θ' + 1, so x ∈ R

So R is a NP test of θ = 0 against θ = θ' for any θ' > 0. So R is UMP.

d. The power is one for all n if θ > 1.

Assignment 7

Problem 8.31, Problem 8.34: due Friday, March 14, 2003.
Problem 8.49, Problem 8.54: due Friday, March 14, 2003.
Problem 8.55, Problem 8.56: due Friday, March 14, 2003.

Solutions

8.31 a. The joint PMF of the data is

f(x|λ) = λ^{Σx_i} e^{-nλ} / Π x_i!

For λ₂ > λ₁,

f(x|λ₂)/f(x|λ₁) = (λ₂/λ₁)^{Σx_i} e^{n(λ₁ - λ₂)}

is increasing in Σ x_i, so the family has MLR. So a test which rejects the null hypothesis if X̄ > c is UMP of its size.

b. If λ = 1, then X̄ is AN(1, 1/n), so c = 1 + z_α/√n. If λ = 2, then X̄ is AN(2, 2/n), so

P(X̄ > 1 + z_α/√n | λ = 2) ≈ P(Z > (z_α/√n - 1)√(n/2)) = P(Z > (z_α - √n)/√2)

For α = 0.05, z_α = 1.645. For β(2) = 0.9 we need (z_α - √n)/√2 = -z_{0.1} = -1.282, so

n = (z_α + √2 z_{0.1})² ≈ (1.645 + 1.813)² ≈ 12

so this suggests using n = 12. It might be better to use the variance-stabilizing transformation √X̄. Either way, use of the CLT for n this small is a bit questionable.

22 so (z α + z 1 β ) ( ) 11.7 so this suggest usig 1. It might be better to use the variace-stabilizig trasformatio X. Either way, use of the CLT is a bit questioable a. T f(t θ), T 0 f(t). So This is icreasig i θ. P (T > c θ) P (T 0 + θ > c) b. First approach: MLR implies that T > c is UMP of θ 1 agaist θ for size α P (T > c θ 1 ). Sice φ (t) α is size α ad β (t) E[φ (T ) θ] α for all θ, UMP implies that β(θ ) β (θ ) α β(θ 1 ). Secod approach: Let α P (T > c θ 1 ), h(t) g(t θ )/g(t θ 1 ). The P (T > c θ ) α E[φ(T ) α θ ] Ca also go back to the NP proof. E[(φ(T ) α)h(t ) θ 1] E[h(T ) θ 1 ] h(c)e[φ(t ) α θ 1] E[h(T ) θ 1 ] h(c)(α α) E[h(T ) θ 1 ] a. The p-value is P (X 7 θ 0.5) 1 P (X 6 θ 0.5) This ca be computed i R with > 1 - pbiom(6,10,0.5) [1] b. The p-value is P (X 3 λ 1) 1 P (X λ 1) This ca be computed i R with > 1 - ppois(, 1) [1] c. If λ 1 the the sufficiet statistic T X 1 + X + X 3 has a Poisso distributio with mea λ T 3. The observed value of T is t 9, so the p-value is P (T 9 λ T 3) 1 P (T 8 λ T 3)

8.54 a. From problem 7.22 the posterior distribution of θ is normal with mean and variance

E[θ | X̄ = x̄] = [τ²/(τ² + σ²/n)] x̄
Var(θ | X̄ = x̄) = τ²(σ²/n)/(τ² + σ²/n)

So

P(θ ≤ 0 | x̄) = P(Z ≤ -E[θ|x̄]/√Var(θ|x̄)) = P(Z ≤ -√(τ²/((σ²/n)(τ² + σ²/n))) x̄)

b. The p-value is

P(X̄ ≥ x̄ | θ = 0) = P(Z ≥ x̄/(σ/√n)) = P(Z ≥ (√n/σ) x̄)

c. For τ = σ = 1 the Bayes probability is larger than the p-value for x̄ > 0 since

√(1/((σ²/n)(1 + σ²/n))) < √n/σ

d. As τ → ∞,

√(τ²/((σ²/n)(τ² + σ²/n))) x̄ = √(1/((σ²/n)(1 + σ²/(nτ²)))) x̄ → (√n/σ) x̄

and therefore P(θ ≤ 0 | x̄) converges to the p-value.

8.55 The risk functions for these tests are given by

R(θ, δ) = b(θ₀ - θ)(1 - β(θ)) for θ < θ₀, and c(θ - θ₀)β(θ) for θ ≥ θ₀
        = b(θ₀ - θ) P(Z + θ > θ₀ - z_α) for θ < θ₀, and c(θ - θ₀) P(Z + θ ≤ θ₀ - z_α) for θ ≥ θ₀
        = b(θ₀ - θ)(1 - Φ(θ₀ - θ - z_α)) for θ < θ₀, and c(θ - θ₀) Φ(θ₀ - θ - z_α) for θ ≥ θ₀

where Z is a standard normal random variable and Φ is the standard normal CDF.

8.56 The risk function for a test δ under zero-one loss is

R(θ, δ) = β(θ) for θ ∈ Θ₀, and 1 - β(θ) for θ ∈ Θ₁

For the two tests in the problem this produces

R(p, δ_I) = P(X ≥ 1 | p) for p ≤ 1/3, and 1 - P(X ≥ 1 | p) for p > 1/3

and

R(p, δ_II) = P(X ≥ 4 | p) for p ≤ 1/3, and 1 - P(X ≥ 4 | p) for p > 1/3

[Figure: risk functions of Test I and Test II plotted against p.]

Test II has lower risk for large and small values of p; Test I has lower risk for p between 1/3 and approximately 0.55. These risk function graphs were produced in R using

p <- seq(0, 1, length = 101)
b1 <- 1 - pbinom(0, 5, p)   # power of Test I
b2 <- 1 - pbinom(3, 5, p)   # power of Test II
r1 <- ifelse(p < 1/3, b1, 1 - b1)
r2 <- ifelse(p < 1/3, b2, 1 - b2)
plot(p, r2, type = "l")
lines(p, r1, lty = 2)
legend(0.7, 0.7, c("Test I", "Test II"), lty = c(2, 1))

Assignment 8

Problem 9.1, Problem 9.2: due Friday, March 28, 2003.
Problem 9.4: due Friday, March 28, 2003.
Problem 9.12, Problem 9.13: due Friday, March 28, 2003.

Solutions

9.1 Since L ≤ U,

{L ≤ θ ≤ U}ᶜ = {L > θ} ∪ {U < θ}

Also {L > θ} and {U < θ} are disjoint. So

P({L ≤ θ ≤ U}ᶜ) = P(L > θ) + P(U < θ) ≤ α₁ + α₂

and thus

P(L ≤ θ ≤ U) ≥ 1 - α₁ - α₂

9.2 This can be interpreted conditionally or unconditionally. Given X₁, ..., Xₙ,

P(X_{n+1} ∈ x̄ ± 1.96/√n | x̄) = P(Z ∈ (x̄ - θ) ± 1.96/√n) ≤ P(Z ∈ 0 ± 1.96/√n) < 0.95 for n > 1

Equality holds only if n = 1 and x̄ = θ. Unconditionally,

P(X_{n+1} ∈ X̄ ± 1.96/√n) = P(X_{n+1} - X̄ ∈ 0 ± 1.96/√n)
                          = P(Z √(1 + 1/n) ∈ 0 ± 1.96/√n)
                          = P(Z ∈ 0 ± 1.96/√(n+1)) < 0.95 for n ≥ 1

9.4 In this problem the X_i are i.i.d. N(0, σ²_X), the Y_i are i.i.d. N(0, σ²_Y), λ = σ²_Y/σ²_X, and under H₀ : λ = λ₀ the Y_i/√λ₀ are i.i.d. N(0, σ²_X).

a. The likelihood ratio reduces to

Λ = C T^{n/2} (1 - T)^{m/2}, where T = ΣX_i² / (ΣX_i² + ΣY_i²/λ₀)

and T is a monotone function of

F = (ΣY_i²/m) / (λ₀ ΣX_i²/n)

So Λ < k if and only if F < c₁ or F > c₂.

b. Under H₀, F ~ F_{m,n}. Choose c₁ = F_{m,n,1-α₁} and c₂ = F_{m,n,α₂} with α₁ + α₂ = α and f(c₁) = f(c₂) for

f(t) = (1/(1 + mt/n))^{n/2} (1 - 1/(1 + mt/n))^{m/2}

c. Inverting the acceptance region

A(λ) = {X, Y : c₁ ≤ (ΣY_i²/m)/(λ ΣX_i²/n) ≤ c₂}

gives

C(X, Y) = {λ : (ΣY_i²/m)/(c₂ ΣX_i²/n) ≤ λ ≤ (ΣY_i²/m)/(c₁ ΣX_i²/n)}
        = [(ΣY_i²/m)/(c₂ ΣX_i²/n), (ΣY_i²/m)/(c₁ ΣX_i²/n)]

This is a 1 - α level confidence interval.

9.12 All of the following are possible pivots:

1. (X̄ - θ)/(S/√n) ~ t_{n-1}
2. (n-1)S²/θ² ~ χ²_{n-1}
3. (X̄ - θ)/(θ/√n) ~ N(0, 1)

The first two produce the obvious intervals. For the third, look for those θ with

-z_{α/2} < Q(X̄, θ) = (X̄ - θ)/(θ/√n) < z_{α/2}

If x̄ ≥ 0, then Q(x̄, ·) is decreasing in θ, and the confidence set is an interval.

[Figure: Q(x̄, θ) as a function of θ for x̄ > 0, crossing z_{α/2} and -z_{α/2} at the interval endpoints L and U.]

If x̄ < 0, then Q(x̄, θ) is negative with a single mode. If x̄ is large enough (close enough to zero), then the confidence set is an interval, corresponding to the two solutions to Q(x̄, θ) = -z_{α/2}. If x̄ is too small, then there are no solutions and the confidence set is empty.

[Figure: Q(x̄, θ) as a function of θ for x̄ < 0.]

9.13 a. U = -log X ~ Exponential(1/θ), so θU ~ Exponential(1). With Y = 1/U,

P(Y/2 < θ < Y) = P(1/2 < θU < 1) = e^{-1/2} - e^{-1} ≈ 0.239

28 b. θu exp(1). P ( log(1 α/) < θu < log(α/)) 1 α ( log(1 α/) P < θ < log(α/) ) 1 α U U [ log(1 α/)y, log(α/)y ] [0.479Y, 0.966Y ] c. The iterval i b. is a little shorter, though it is ot of optimal legth. b a

Assignment 9

Problem 9.27, Problem 9.33: due Friday, April 4, 2003.
Problem 10.1: due Friday, April 4, 2003.

Solutions

9.27 a. The posterior density is

π(λ|x) ∝ λ^{-n} e^{-Σx_i/λ} λ^{-(a+1)} e^{-1/(bλ)} = λ^{-(n+a+1)} e^{-[1/b + Σx_i]/λ} ∝ IG(n + a, [1/b + Σx_i]^{-1})

The inverse gamma density is unimodal, so the HPD region is an interval [c₁, c₂] with c₁, c₂ chosen to have equal density values and

P(Y > 1/c₁) + P(Y < 1/c₂) = α, with Y ~ Gamma(n + a, [1/b + Σx_i]^{-1})

b. The distribution of S² is Gamma((n-1)/2, 2σ²/(n-1)). The resulting posterior density is therefore

π(σ²|s²) ∝ (σ²)^{-(n-1)/2} e^{-(n-1)s²/(2σ²)} (σ²)^{-(a+1)} e^{-1/(bσ²)}
         = (σ²)^{-((n-1)/2 + a + 1)} e^{-[1/b + (n-1)s²/2]/σ²}
         ∝ IG((n-1)/2 + a, [1/b + (n-1)s²/2]^{-1})

As in the previous part, the HPD region is an interval that can be determined by solving a corresponding set of equations.

c. The limiting posterior distribution is IG((n-1)/2, [(n-1)s²/2]^{-1}). The limiting HPD region is an interval [c₁, c₂] with c₁ = (n-1)s²/χ²_{n-1,α₁} and c₂ = (n-1)s²/χ²_{n-1,1-α₂}, where α₁ + α₂ = α and c₁, c₂ have equal posterior density values.

9.33 a. Since 0 ∈ C_a(x) for all a and x,

P_{µ=0}(0 ∈ C_a(X)) = 1

For µ < 0,

P_µ(µ ∈ C_a(X)) = P_µ(min{0, X - a} ≤ µ) = P_µ(X - a ≤ µ) = P_µ(X - µ ≤ a) ≥ 1 - α

if a ≥ z_α. For µ > 0,

P_µ(µ ∈ C_a(X)) = P_µ(max{0, X + a} ≥ µ) = P_µ(X + a ≥ µ) = P_µ(X - µ ≥ -a) ≥ 1 - α

if a ≥ z_α.

b. For π(µ) ≡ 1, the posterior is f(µ|x) = N(x, 1). For a = z_α and -z_α ≤ x ≤ z_α,

P(min{0, x - a} ≤ µ ≤ max{0, x + a} | X = x) = P(x - a ≤ µ ≤ x + a | X = x) = 1 - 2α

For a = z_α and x > z_α,

P(min{0, x - a} ≤ µ ≤ max{0, x + a} | X = x) = P(0 ≤ µ ≤ x + a | X = x) = P(-x ≤ Z ≤ a) → 1 - α as x → ∞

10.1 The mean is µ = θ/3, so the method of moments estimator is W = 3X̄. By the law of large numbers X̄ →_P µ = θ/3, so W = 3X̄ →_P θ, i.e. W is consistent.

Assignment 10

Problem 10.3, Problem 10.9 (but only for e^{-λ}; do not do λe^{-λ}): due Friday, April 11, 2003.

Problem: Find the approximate joint distribution of the maximum likelihood estimators in problem 7.14 of the text. Due Friday, April 11, 2003.

Problem: In the setting of problem 7.14 of the text, suppose n = 100, ΣW_i = 71, and ΣZ_i = 7802. Also assume a smooth, vague prior distribution. Find the posterior probability that λ > 100. Due Friday, April 11, 2003.

Solutions

10.3 a. The derivative of the log likelihood is

(∂/∂θ)[-(n/2) log θ - Σ(x_i - θ)²/(2θ)]
= -n/(2θ) + Σ(x_i - θ)/θ + Σ(x_i - θ)²/(2θ²)
= -n/(2θ) - n/2 + Σx_i²/(2θ²)
= (n/(2θ²))(W - θ - θ²)

with W = (1/n) Σ x_i². So the MLE is a root of the quadratic equation θ² + θ - W = 0. The roots are

θ = (-1 ± √(1 + 4W))/2

The MLE is the larger root, since that represents a local maximum and since the smaller root is negative.

b. The Fisher information is

I_n(θ) = E[-(∂²/∂θ²) log L(θ)] = E[-n/(2θ²) + Σx_i²/θ³] = -n/(2θ²) + n(θ + θ²)/θ³ = n(θ + 1/2)/θ²

using E[X_i²] = θ + θ². So θ̂ is AN(θ, θ²/(n(θ + 1/2))).
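The quadratic-root MLE in 10.3 is easy to exercise by simulation. A minimal sketch (illustration only; θ = 2 and n = 100000 are arbitrary settings, and the model is N(θ, θ), variance equal to the mean, as in the derivation above):

```python
import math
import random

# Simulate X_i ~ N(theta, theta), form W = mean of X_i^2, and take the
# positive root of t^2 + t - W = 0 as the MLE.
random.seed(3)
theta, n = 2.0, 100_000
w = sum(random.gauss(theta, math.sqrt(theta)) ** 2 for _ in range(n)) / n
mle = (-1 + math.sqrt(1 + 4 * w)) / 2
print(mle)  # should be close to theta = 2.0
```

Since E[W] = θ + θ², the positive root of t² + t - W = 0 recovers θ as n grows.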

10.9 (for e^{-λ} only) The UMVUE of e^{-λ} is V = (1 - 1/n)^{ΣX_i} and the MLE is e^{-X̄}. Since

√n (V - e^{-X̄}) = √n O_P(1/n) = O_P(1/√n)

both √n(V - e^{-λ}) and √n(e^{-X̄} - e^{-λ}) have the same normal limiting distribution and therefore their ARE is one. In finite samples one can argue that the UMVUE should be preferred if unbiasedness is deemed important. The MLE is always larger than the UMVUE in this case, which might in some contexts be an argument for using the UMVUE. A comparison of mean square errors might be useful. For the data provided, the values of e^{-X̄} and V are numerically very close.

Problem: Find the approximate joint distribution of the maximum likelihood estimators in problem 7.14 of the text.

Solution: The log likelihood is

l(λ, µ) = -(Σw_i) log λ - (n - Σw_i) log µ - (Σz_i)(1/λ + 1/µ)

So

(∂/∂λ) l(λ, µ) = -Σw_i/λ + Σz_i/λ²
(∂/∂µ) l(λ, µ) = -(n - Σw_i)/µ + Σz_i/µ²
(∂²/∂λ²) l(λ, µ) = Σw_i/λ² - 2Σz_i/λ³
(∂²/∂µ²) l(λ, µ) = (n - Σw_i)/µ² - 2Σz_i/µ³
(∂²/∂λ∂µ) l(λ, µ) = 0

Now

E[W_i] = µ/(λ + µ), E[Z_i] = λµ/(λ + µ)

So

E[-(∂²/∂λ²) l(λ, µ)] = -nµ/((λ+µ)λ²) + 2nµ/((λ+µ)λ²) = nµ/((λ+µ)λ²)
E[-(∂²/∂µ²) l(λ, µ)] = nλ/((λ+µ)µ²)

and thus

(λ̂, µ̂) is AN((λ, µ), (1/n) diag(λ²(λ+µ)/µ, µ²(λ+µ)/λ))

Problem: In the setting of problem 7.14 of the text, suppose n = 100, ΣW_i = 71, and ΣZ_i = 7802. Also assume a smooth, vague prior distribution. Find the posterior probability that λ > 100.

Solution: The MLEs are

λ̂ = ΣZ_i/ΣW_i = 109.89, µ̂ = ΣZ_i/(n - ΣW_i)

The observed information is

Î(λ̂, µ̂) = diag(ΣW_i/λ̂², (n - ΣW_i)/µ̂²)

Thus the marginal posterior distribution of λ is approximately

N(λ̂, λ̂²/ΣW_i) = N(109.89, 13.04²)

So

P(λ > 100 | X) ≈ P(Z > (100 - 109.89)/13.04) = P(Z > -0.758) ≈ 0.776
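The posterior probability can be reproduced mechanically (a sketch for illustration; it assumes the totals n = 100, ΣW_i = 71, ΣZ_i = 7802, which are consistent with the posterior mean 109.89 quoted above):

```python
from statistics import NormalDist

# Normal approximation to the marginal posterior of lambda:
# mean = sum(Z)/sum(W), sd = mean / sqrt(sum(W)).
lam_hat = 7802 / 71
sd = lam_hat / 71 ** 0.5
post_prob = 1 - NormalDist(lam_hat, sd).cdf(100)
print(round(lam_hat, 2), round(sd, 2), round(post_prob, 3))
```

With a vague prior the posterior is centered at the MLE with variance given by the inverse observed information, which is why only the two totals are needed.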

Assignment 11

Problem: Let X₁, ..., Xₙ be a random sample from a Pareto(1, β) distribution with density f(x|β) = β/x^{β+1} for x ≥ 1. Find the asymptotic relative efficiency of the method of moments estimator of β to the MLE of β. Due Friday, April 18, 2003.

Problem: Let X₁, ..., Xₙ be i.i.d. Poisson(λ) and let W = e^{-X̄}. Find the parametric bootstrap variance Var*(W) and show that Var*(W)/Var(W) →_P 1 as n → ∞. Due Friday, April 18, 2003.

Problem:
1. Let X₁, ..., Xₙ be a random sample that may come from a Poisson distribution with mean λ. Find the sandwich estimator of the asymptotic variance of the MLE λ̂ = X̄.
2. Let g(x) = e^{-x} for x > 0 be an exponential density with mean one and let f(x|θ) be a N(θ, 1) density. Find the value θ* corresponding to the density of the form f(x|θ) that is closest to g in Kullback-Leibler divergence.
Due Friday, April 18, 2003.

Solutions

Problem: Find the asymptotic relative efficiency of the method of moments estimator of β to the MLE of β for a Pareto(1, β) sample.

Solution: The mean is finite and E[X] = β/(β-1) if β > 1. So the method of moments estimator is β̃_MM = X̄/(X̄ - 1) if X̄ > 1 and undefined otherwise. The variance is finite and Var(X) = β/((β-1)²(β-2)) if β > 2. So for β > 2 the central limit theorem implies that

√n (X̄ - β/(β-1)) →_D N(0, β/((β-1)²(β-2)))

Since β̃_MM = g(X̄) with g(x) = x/(x-1) and g'(x) = -1/(x-1)², we have g'(β/(β-1)) = -(β-1)², and the delta method shows that

√n (β̃_MM - β) →_D N(0, g'(β/(β-1))² β/((β-1)²(β-2))) = N(0, β(β-1)²/(β-2))

The MLE is β̂ = n/Σ log X_i and the Fisher information is I_n(β) = n/β². So the asymptotic relative efficiency of the method of moments estimator to the MLE is

ARE(β̃_MM, β̂) = β² / (β(β-1)²/(β-2)) = β(β-2)/(β-1)²

for β > 2. For β ≤ 2 the method of moments estimator will exist for large n but will not be √n-consistent; it makes sense to say the asymptotic relative efficiency is zero in this case.

Problem: Find the parametric bootstrap variance Var*(W) of W = e^{-X̄} and show that Var*(W)/Var(W) →_P 1.

Solution: Using the MGF of the Poisson distribution we have

E[W | λ] = M_{ΣX_i}(-1/n) = exp{nλ(e^{-1/n} - 1)}
E[W² | λ] = M_{ΣX_i}(-2/n) = exp{nλ(e^{-2/n} - 1)}

The variance of W is therefore

Var(W | λ) = exp{nλ(e^{-2/n} - 1)} - exp{2nλ(e^{-1/n} - 1)}
           = exp{2nλ(e^{-1/n} - 1)}(exp{nλ(e^{-2/n} - 2e^{-1/n} + 1)} - 1)
           = exp{2nλ(e^{-1/n} - 1)}(exp{nλ(e^{-1/n} - 1)²} - 1)
           = λ bₙ g(aₙλ, bₙλ)

with

g(x, y) = e^x (e^y - 1)/y if y ≠ 0, and e^x if y = 0
aₙ = 2n(e^{-1/n} - 1) → -2
bₙ = n(e^{-1/n} - 1)² → 0

The bootstrap variance is

Var*(W) = Var(W | λ = X̄) = exp{2nX̄(e^{-1/n} - 1)}(exp{nX̄(e^{-1/n} - 1)²} - 1) = X̄ bₙ g(aₙX̄, bₙX̄)

Now g is continuous, aₙ → -2 and bₙ → 0. So by the law of large numbers, Slutsky's theorem, and the continuous mapping theorem

Var*(W)/Var(W | λ) = X̄ g(aₙX̄, bₙX̄) / (λ g(aₙλ, bₙλ)) →_P λ g(-2λ, 0) / (λ g(-2λ, 0)) = 1
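The closed form for Var(W | λ) above can be verified against a direct Monte Carlo estimate (an illustration only; λ = 2 and n = 10 are arbitrary, and the Poisson sampler is Knuth's simple method, adequate for small λ):

```python
import math
import random

def rpois(lam, rng):
    # Knuth's Poisson sampler: count uniforms until their product drops
    # below exp(-lam).
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(4)
lam, n, reps = 2.0, 10, 200_000
# Exact Var(W | lam) from the MGF identity derived above.
exact = (math.exp(n * lam * (math.exp(-2 / n) - 1))
         - math.exp(2 * n * lam * (math.exp(-1 / n) - 1)))
ws = [math.exp(-sum(rpois(lam, rng) for _ in range(n)) / n) for _ in range(reps)]
m = sum(ws) / reps
mc = sum((w - m) ** 2 for w in ws) / reps
print(exact, mc)  # the two variances should agree closely
```

Replacing `lam` by a simulated X̄ in the `exact` formula gives the parametric bootstrap variance Var*(W) discussed above.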

Problem: 1. Find the sandwich estimator of the asymptotic variance of λ̂ = X̄ for a sample that may be Poisson. 2. Find the N(θ, 1) density closest in Kullback-Leibler divergence to the exponential density g.

Solution:

1. For the Poisson distribution

(∂/∂λ) log f(x|λ) = x/λ - 1
(∂²/∂λ²) log f(x|λ) = -x/λ²

So the sandwich estimator of the asymptotic variance of the MLE λ̂ = X̄ is

V̂ar(λ̂) = [Σ (∂²/∂λ²) log f(X_i|λ̂)]^{-1} [Σ ((∂/∂λ) log f(X_i|λ̂))²] [Σ (∂²/∂λ²) log f(X_i|λ̂)]^{-1}
        = (1/n²) Σ (X_i - X̄)²

since at λ̂ = X̄ the outer terms are (-n/X̄)^{-1} and the middle term is Σ(X_i - X̄)²/X̄².

2. The Kullback-Leibler divergence from any distribution with density g to a N(θ, 1) distribution is

KL(g, f(·|θ)) = ∫ log(g(x)/f(x|θ)) g(x) dx = const + (1/2) ∫ (x - θ)² g(x) dx

This is minimized by θ* = E_g[X]; for this particular choice of g this means θ* = 1.

Assignment 12

Problem (b): due Friday, April 25, 2003.
Problem: Consider the two-sample Bernoulli setting (S₁ successes in n₁ trials, S₂ successes in n₂ trials, H₀ : p₁ = p₂). Derive an expression for -2 log Λ, where Λ is the likelihood ratio test statistic, and find the approximate distribution of this quantity under the null hypothesis. Due Friday, April 25, 2003.
Problem: due Friday, April 25, 2003.

Solutions

(b) For the Huber M-estimator ψ(-∞) = -k and ψ(∞) = k, so η = 1/(1+1) = 1/2 and the breakdown is 50%. The formula for the breakdown given in this problem is only applicable to monotone ψ functions. For redescending ψ functions the estimating equation need not have a unique root. To resolve this one can specify that an estimator should be determined using a local root finding procedure starting at, say, the sample median. In this case the M-estimator inherits the 50% breakdown of the median. See Huber, pages 53-55, for a more complete discussion.

Problem: Derive an expression for -2 log Λ for the two-sample Bernoulli problem and find its approximate null distribution.

Solution: The restricted likelihood corresponds to n₁ + n₂ Bernoulli trials with S₁ + S₂ successes and common success probability p, so the MLE of p is p̂ = (S₁ + S₂)/(n₁ + n₂). The unrestricted likelihood consists of two independent sets of Bernoulli trials with success probabilities p₁ and p₂, and the corresponding MLEs are p̂₁ = S₁/n₁ and p̂₂ = S₂/n₂. With F_i = n_i - S_i the failure counts, the likelihood ratio statistic is therefore

Λ = p̂^{S₁+S₂}(1 - p̂)^{F₁+F₂} / [p̂₁^{S₁}(1 - p̂₁)^{F₁} p̂₂^{S₂}(1 - p̂₂)^{F₂}]

and

-2 log Λ = 2(S₁ log(p̂₁/p̂) + S₂ log(p̂₂/p̂) + F₁ log((1-p̂₁)/(1-p̂)) + F₂ log((1-p̂₂)/(1-p̂)))

The restricted parameter space under the null hypothesis is one-dimensional and the unrestricted parameter space is two-dimensional. Thus under the null hypothesis the distribution of -2 log Λ is approximately χ²₁.

The log likelihood for a random sample from a Gamma(α, β) distribution, with α assumed known, is

log L(β) = -n log Γ(α) - nα log β + (α - 1) Σ log X_i - nX̄/β

So the score function is

V(β) = (∂/∂β) log L(β) = -nα/β + nX̄/β²

and the Fisher information is

I_n(β) = E[-(∂²/∂β²) log L(β)] = E[-nα/β² + 2nX̄/β³] = -nα/β² + 2nα/β² = nα/β²

So the score statistic is

V(β)/√I_n(β) = (nX̄/β² - nα/β)/(√(nα)/β) = √n (X̄ - αβ)/(√α β)

Assignment 13

Problem: due Friday, May 2, 2003.

Problem: Let x₁, ..., xₙ be constants, and suppose

Y_i = β₁(1 - e^{-β₂x_i}) + ε_i

with the ε_i independent N(0, σ²) random variables.
a. Find the normal equations for the least squares estimators of β₁ and β₂.
b. Suppose β₂ is known. Find the least squares estimator for β₁ as a function of the data and β₂.
Due Friday, May 2, 2003.

Problem: Let x₁, ..., xₙ be constants, and suppose

Y_i = β₁ + β₂x_i + ε_i

Let y* be a constant and let x* satisfy

y* = β₁ + β₂x*

that is, x* is the value of x at which the mean response is y*.
a. Find the maximum likelihood estimator x̂* of x*.
b. Use the delta method to find the approximate sampling distribution of x̂*.
Due Friday, May 2, 2003.

Solutions

This problem should have stated that r is assumed known.

a. The log likelihood for p is

log L(p) = const + nr log p + (Σx_i) log(1 - p)

The first and second partial derivatives with respect to p are

(∂/∂p) log L(p) = nr/p - Σx_i/(1-p)
(∂²/∂p²) log L(p) = -nr/p² - Σx_i/(1-p)²

So the Fisher information is

I_n(p) = nr/p² + E[ΣX_i]/(1-p)² = nr/p² + nr/(p(1-p)) = nr/(p²(1-p))

using E[X_i] = r(1-p)/p, and the score test statistic is

(nr/p - Σx_i/(1-p)) / √(nr/(p²(1-p)))

b. The mean is µ = r(1-p)/p. The score statistic can be written in terms of the mean as

√n (x̄ - µ)/√(µ + µ²/r)

since Var(X_i) = r(1-p)/p² = µ + µ²/r. A confidence interval is given by

C = {µ : |√n (x̄ - µ)/√(µ + µ²/r)| ≤ z_{α/2}}

The endpoints are the solutions of the quadratic equation

µ²(n - z²_{α/2}/r) - µ(2nx̄ + z²_{α/2}) + nx̄² = 0

that is,

U, L = [2nx̄ + z²_{α/2} ± √((2nx̄ + z²_{α/2})² - 4(n - z²_{α/2}/r) nx̄²)] / (2(n - z²_{α/2}/r))

To use the continuity correction, replace x̄ with x̄ + 1/(2n) for the upper end point and with x̄ - 1/(2n) for the lower end point.

Problem: Find the normal equations for the least squares estimators in the model Y_i = β₁(1 - e^{-β₂x_i}) + ε_i.

Solution:

a. The mean response is µ_i(β) = β₁(1 - e^{-β₂x_i}). So the partial derivatives are

(∂/∂β₁) µ_i(β) = 1 - e^{-β₂x_i}
(∂/∂β₂) µ_i(β) = β₁ x_i e^{-β₂x_i}

So the normal equations are

Σ (1 - e^{-β₂x_i}) Y_i = β₁ Σ (1 - e^{-β₂x_i})²
Σ x_i e^{-β₂x_i} Y_i = β₁ Σ x_i e^{-β₂x_i} (1 - e^{-β₂x_i})

b. If β₂ is known then the least squares estimator for β₁ can be found by solving the first normal equation:

β̂₁ = Σ (1 - e^{-β₂x_i}) Y_i / Σ (1 - e^{-β₂x_i})²
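Part b says that with β₂ known the model is linear in β₁, so β̂₁ is an ordinary least squares coefficient. A minimal sketch (illustration only; the true values β₁ = 3, β₂ = 0.8, σ = 0.2 and the design points are made up):

```python
import math
import random

# Simulate Y_i = b1*(1 - exp(-b2*x_i)) + noise, then apply the closed-form
# estimator from the first normal equation.
random.seed(5)
b1, b2, sigma = 3.0, 0.8, 0.2
xs = [0.5 * i for i in range(1, 21)]
ys = [b1 * (1 - math.exp(-b2 * x)) + random.gauss(0, sigma) for x in xs]

g = [1 - math.exp(-b2 * x) for x in xs]          # known regressor values
b1_hat = sum(gi * yi for gi, yi in zip(g, ys)) / sum(gi * gi for gi in g)

def sse(b):
    return sum((yi - b * gi) ** 2 for gi, yi in zip(g, ys))

# b1_hat should beat nearby candidate values of beta_1.
print(b1_hat, sse(b1_hat) < sse(b1_hat + 0.01), sse(b1_hat) < sse(b1_hat - 0.01))
```

The `sse` comparison confirms the estimator sits at the minimum of the error sum of squares, which is exactly what the normal equation encodes.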

Problem: Find the approximate sampling distribution of x̂* in the linear calibration problem.

Solution: This problem should have explicitly assumed normal errors.

a. Since x* = (y* - β₁)/β₂, the MLE is

x̂* = (y* - β̂₁)/β̂₂

by MLE invariance.

b. The partial derivatives of the function g(β₁, β₂) = (y* - β₁)/β₂ are

(∂/∂β₁) g(β₁, β₂) = -1/β₂
(∂/∂β₂) g(β₁, β₂) = -(y* - β₁)/β₂²

So for β₂ ≠ 0 the variance of the approximate sampling distribution is

Var(x̂*) = ∇g σ²(XᵀX)^{-1} ∇gᵀ
= (σ²/β₂²) [Σx_i²/(n Σ(x_i - x̄)²)] - 2x̄(y* - β₁)σ²/(β₂³ Σ(x_i - x̄)²) + (y* - β₁)²σ²/(β₂⁴ Σ(x_i - x̄)²)
= (σ²/β₂²) [1/n + (y* - β₁ - β₂x̄)²/(β₂² Σ(x_i - x̄)²)]
= (σ²/β₂²) [1/n + (x* - x̄)²/Σ(x_i - x̄)²]

So by the delta method x̂* is AN(x*, Var(x̂*)). The approximation is reasonably good if β₂ is far from zero, but the actual mean and variance of x̂* do not exist.


Lecture 7: Properties of Random Samples Lecture 7: Properties of Radom Samples 1 Cotiued From Last Class Theorem 1.1. Let X 1, X,...X be a radom sample from a populatio with mea µ ad variace σ

More information

Lecture 11 and 12: Basic estimation theory

Lecture 11 and 12: Basic estimation theory Lecture ad 2: Basic estimatio theory Sprig 202 - EE 94 Networked estimatio ad cotrol Prof. Kha March 2 202 I. MAXIMUM-LIKELIHOOD ESTIMATORS The maximum likelihood priciple is deceptively simple. Louis

More information

MATH 320: Probability and Statistics 9. Estimation and Testing of Parameters. Readings: Pruim, Chapter 4

MATH 320: Probability and Statistics 9. Estimation and Testing of Parameters. Readings: Pruim, Chapter 4 MATH 30: Probability ad Statistics 9. Estimatio ad Testig of Parameters Estimatio ad Testig of Parameters We have bee dealig situatios i which we have full kowledge of the distributio of a radom variable.

More information

Direction: This test is worth 250 points. You are required to complete this test within 50 minutes.

Direction: This test is worth 250 points. You are required to complete this test within 50 minutes. Term Test October 3, 003 Name Math 56 Studet Number Directio: This test is worth 50 poits. You are required to complete this test withi 50 miutes. I order to receive full credit, aswer each problem completely

More information

Exponential Families and Bayesian Inference

Exponential Families and Bayesian Inference Computer Visio Expoetial Families ad Bayesia Iferece Lecture Expoetial Families A expoetial family of distributios is a d-parameter family f(x; havig the followig form: f(x; = h(xe g(t T (x B(, (. where

More information

Direction: This test is worth 150 points. You are required to complete this test within 55 minutes.

Direction: This test is worth 150 points. You are required to complete this test within 55 minutes. Term Test 3 (Part A) November 1, 004 Name Math 6 Studet Number Directio: This test is worth 10 poits. You are required to complete this test withi miutes. I order to receive full credit, aswer each problem

More information

1.010 Uncertainty in Engineering Fall 2008

1.010 Uncertainty in Engineering Fall 2008 MIT OpeCourseWare http://ocw.mit.edu.00 Ucertaity i Egieerig Fall 2008 For iformatio about citig these materials or our Terms of Use, visit: http://ocw.mit.edu.terms. .00 - Brief Notes # 9 Poit ad Iterval

More information

Large Sample Theory. Convergence. Central Limit Theorems Asymptotic Distribution Delta Method. Convergence in Probability Convergence in Distribution

Large Sample Theory. Convergence. Central Limit Theorems Asymptotic Distribution Delta Method. Convergence in Probability Convergence in Distribution Large Sample Theory Covergece Covergece i Probability Covergece i Distributio Cetral Limit Theorems Asymptotic Distributio Delta Method Covergece i Probability A sequece of radom scalars {z } = (z 1,z,

More information

IIT JAM Mathematical Statistics (MS) 2006 SECTION A

IIT JAM Mathematical Statistics (MS) 2006 SECTION A IIT JAM Mathematical Statistics (MS) 6 SECTION A. If a > for ad lim a / L >, the which of the followig series is ot coverget? (a) (b) (c) (d) (d) = = a = a = a a + / a lim a a / + = lim a / a / + = lim

More information

This exam contains 19 pages (including this cover page) and 10 questions. A Formulae sheet is provided with the exam.

This exam contains 19 pages (including this cover page) and 10 questions. A Formulae sheet is provided with the exam. Probability ad Statistics FS 07 Secod Sessio Exam 09.0.08 Time Limit: 80 Miutes Name: Studet ID: This exam cotais 9 pages (icludig this cover page) ad 0 questios. A Formulae sheet is provided with the

More information

Lecture 3: MLE and Regression

Lecture 3: MLE and Regression STAT/Q SCI 403: Itroductio to Resamplig Methods Sprig 207 Istructor: Ye-Chi Che Lecture 3: MLE ad Regressio 3. Parameters ad Distributios Some distributios are idexed by their uderlyig parameters. Thus,

More information

Stat410 Probability and Statistics II (F16)

Stat410 Probability and Statistics II (F16) Some Basic Cocepts of Statistical Iferece (Sec 5.) Suppose we have a rv X that has a pdf/pmf deoted by f(x; θ) or p(x; θ), where θ is called the parameter. I previous lectures, we focus o probability problems

More information

APPM 5720 Solutions to Problem Set Five

APPM 5720 Solutions to Problem Set Five APPM 5720 Solutios to Problem Set Five 1. The pdf is f(x; α, β) 1 Γ(α) βα x α 1 e βx I (0, ) (x). The joit pdf is f( x; α, β) 1 [Γ(α)] β α ( i1 x i ) α 1 e β i1 x i i1 I (0, ) (x i ) 1 [Γ(α)] βα } {{ }

More information

Statistical Inference (Chapter 10) Statistical inference = learn about a population based on the information provided by a sample.

Statistical Inference (Chapter 10) Statistical inference = learn about a population based on the information provided by a sample. Statistical Iferece (Chapter 10) Statistical iferece = lear about a populatio based o the iformatio provided by a sample. Populatio: The set of all values of a radom variable X of iterest. Characterized

More information

Last Lecture. Unbiased Test

Last Lecture. Unbiased Test Last Lecture Biostatistics 6 - Statistical Iferece Lecture Uiformly Most Powerful Test Hyu Mi Kag March 8th, 3 What are the typical steps for costructig a likelihood ratio test? Is LRT statistic based

More information

January 25, 2017 INTRODUCTION TO MATHEMATICAL STATISTICS

January 25, 2017 INTRODUCTION TO MATHEMATICAL STATISTICS Jauary 25, 207 INTRODUCTION TO MATHEMATICAL STATISTICS Abstract. A basic itroductio to statistics assumig kowledge of probability theory.. Probability I a typical udergraduate problem i probability, we

More information

MATH 472 / SPRING 2013 ASSIGNMENT 2: DUE FEBRUARY 4 FINALIZED

MATH 472 / SPRING 2013 ASSIGNMENT 2: DUE FEBRUARY 4 FINALIZED MATH 47 / SPRING 013 ASSIGNMENT : DUE FEBRUARY 4 FINALIZED Please iclude a cover sheet that provides a complete setece aswer to each the followig three questios: (a) I your opiio, what were the mai ideas

More information

Summary. Recap. Last Lecture. Let W n = W n (X 1,, X n ) = W n (X) be a sequence of estimators for

Summary. Recap. Last Lecture. Let W n = W n (X 1,, X n ) = W n (X) be a sequence of estimators for Last Lecture Biostatistics 602 - Statistical Iferece Lecture 17 Asymptotic Evaluatio of oit Estimators Hyu Mi Kag March 19th, 2013 What is a Bayes Risk? What is the Bayes rule Estimator miimizig square

More information

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n.

Resampling Methods. X (1/2), i.e., Pr (X i m) = 1/2. We order the data: X (1) X (2) X (n). Define the sample median: ( n. Jauary 1, 2019 Resamplig Methods Motivatio We have so may estimators with the property θ θ d N 0, σ 2 We ca also write θ a N θ, σ 2 /, where a meas approximately distributed as Oce we have a cosistet estimator

More information

Problem Set 4 Due Oct, 12

Problem Set 4 Due Oct, 12 EE226: Radom Processes i Systems Lecturer: Jea C. Walrad Problem Set 4 Due Oct, 12 Fall 06 GSI: Assae Gueye This problem set essetially reviews detectio theory ad hypothesis testig ad some basic otios

More information

Last Lecture. Biostatistics Statistical Inference Lecture 16 Evaluation of Bayes Estimator. Recap - Example. Recap - Bayes Estimator

Last Lecture. Biostatistics Statistical Inference Lecture 16 Evaluation of Bayes Estimator. Recap - Example. Recap - Bayes Estimator Last Lecture Biostatistics 60 - Statistical Iferece Lecture 16 Evaluatio of Bayes Estimator Hyu Mi Kag March 14th, 013 What is a Bayes Estimator? Is a Bayes Estimator the best ubiased estimator? Compared

More information

x = Pr ( X (n) βx ) =

x = Pr ( X (n) βx ) = Exercise 93 / page 45 The desity of a variable X i i 1 is fx α α a For α kow let say equal to α α > fx α α x α Pr X i x < x < Usig a Pivotal Quatity: x α 1 < x < α > x α 1 ad We solve i a similar way as

More information

Last Lecture. Wald Test

Last Lecture. Wald Test Last Lecture Biostatistics 602 - Statistical Iferece Lecture 22 Hyu Mi Kag April 9th, 2013 Is the exact distributio of LRT statistic typically easy to obtai? How about its asymptotic distributio? For testig

More information

Mathematical Statistics - MS

Mathematical Statistics - MS Paper Specific Istructios. The examiatio is of hours duratio. There are a total of 60 questios carryig 00 marks. The etire paper is divided ito three sectios, A, B ad C. All sectios are compulsory. Questios

More information

Stat 319 Theory of Statistics (2) Exercises

Stat 319 Theory of Statistics (2) Exercises Kig Saud Uiversity College of Sciece Statistics ad Operatios Research Departmet Stat 39 Theory of Statistics () Exercises Refereces:. Itroductio to Mathematical Statistics, Sixth Editio, by R. Hogg, J.

More information

( θ. sup θ Θ f X (x θ) = L. sup Pr (Λ (X) < c) = α. x : Λ (x) = sup θ H 0. sup θ Θ f X (x θ) = ) < c. NH : θ 1 = θ 2 against AH : θ 1 θ 2

( θ. sup θ Θ f X (x θ) = L. sup Pr (Λ (X) < c) = α. x : Λ (x) = sup θ H 0. sup θ Θ f X (x θ) = ) < c. NH : θ 1 = θ 2 against AH : θ 1 θ 2 82 CHAPTER 4. MAXIMUM IKEIHOOD ESTIMATION Defiitio: et X be a radom sample with joit p.m/d.f. f X x θ. The geeralised likelihood ratio test g.l.r.t. of the NH : θ H 0 agaist the alterative AH : θ H 1,

More information

Lecture Note 8 Point Estimators and Point Estimation Methods. MIT Spring 2006 Herman Bennett

Lecture Note 8 Point Estimators and Point Estimation Methods. MIT Spring 2006 Herman Bennett Lecture Note 8 Poit Estimators ad Poit Estimatio Methods MIT 14.30 Sprig 2006 Herma Beett Give a parameter with ukow value, the goal of poit estimatio is to use a sample to compute a umber that represets

More information

6. Sufficient, Complete, and Ancillary Statistics

6. Sufficient, Complete, and Ancillary Statistics Sufficiet, Complete ad Acillary Statistics http://www.math.uah.edu/stat/poit/sufficiet.xhtml 1 of 7 7/16/2009 6:13 AM Virtual Laboratories > 7. Poit Estimatio > 1 2 3 4 5 6 6. Sufficiet, Complete, ad Acillary

More information

An Introduction to Asymptotic Theory

An Introduction to Asymptotic Theory A Itroductio to Asymptotic Theory Pig Yu School of Ecoomics ad Fiace The Uiversity of Hog Kog Pig Yu (HKU) Asymptotic Theory 1 / 20 Five Weapos i Asymptotic Theory Five Weapos i Asymptotic Theory Pig Yu

More information

Topic 9: Sampling Distributions of Estimators

Topic 9: Sampling Distributions of Estimators Topic 9: Samplig Distributios of Estimators Course 003, 2016 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be

More information

Topic 9: Sampling Distributions of Estimators

Topic 9: Sampling Distributions of Estimators Topic 9: Samplig Distributios of Estimators Course 003, 2018 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be

More information

Summary. Recap ... Last Lecture. Summary. Theorem

Summary. Recap ... Last Lecture. Summary. Theorem Last Lecture Biostatistics 602 - Statistical Iferece Lecture 23 Hyu Mi Kag April 11th, 2013 What is p-value? What is the advatage of p-value compared to hypothesis testig procedure with size α? How ca

More information

Lecture 12: September 27

Lecture 12: September 27 36-705: Itermediate Statistics Fall 207 Lecturer: Siva Balakrisha Lecture 2: September 27 Today we will discuss sufficiecy i more detail ad the begi to discuss some geeral strategies for costructig estimators.

More information

Topic 9: Sampling Distributions of Estimators

Topic 9: Sampling Distributions of Estimators Topic 9: Samplig Distributios of Estimators Course 003, 2018 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be

More information

Probability 2 - Notes 10. Lemma. If X is a random variable and g(x) 0 for all x in the support of f X, then P(g(X) 1) E[g(X)].

Probability 2 - Notes 10. Lemma. If X is a random variable and g(x) 0 for all x in the support of f X, then P(g(X) 1) E[g(X)]. Probability 2 - Notes 0 Some Useful Iequalities. Lemma. If X is a radom variable ad g(x 0 for all x i the support of f X, the P(g(X E[g(X]. Proof. (cotiuous case P(g(X Corollaries x:g(x f X (xdx x:g(x

More information

Distribution of Random Samples & Limit theorems

Distribution of Random Samples & Limit theorems STAT/MATH 395 A - PROBABILITY II UW Witer Quarter 2017 Néhémy Lim Distributio of Radom Samples & Limit theorems 1 Distributio of i.i.d. Samples Motivatig example. Assume that the goal of a study is to

More information

EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY

EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA, 016 MODULE : Statistical Iferece Time allowed: Three hours Cadidates should aswer FIVE questios. All questios carry equal marks. The umber

More information

Since X n /n P p, we know that X n (n. Xn (n X n ) Using the asymptotic result above to obtain an approximation for fixed n, we obtain

Since X n /n P p, we know that X n (n. Xn (n X n ) Using the asymptotic result above to obtain an approximation for fixed n, we obtain Assigmet 9 Exercise 5.5 Let X biomial, p, where p 0, 1 is ukow. Obtai cofidece itervals for p i two differet ways: a Sice X / p d N0, p1 p], the variace of the limitig distributio depeds oly o p. Use the

More information

17. Joint distributions of extreme order statistics Lehmann 5.1; Ferguson 15

17. Joint distributions of extreme order statistics Lehmann 5.1; Ferguson 15 17. Joit distributios of extreme order statistics Lehma 5.1; Ferguso 15 I Example 10., we derived the asymptotic distributio of the maximum from a radom sample from a uiform distributio. We did this usig

More information

The variance of a sum of independent variables is the sum of their variances, since covariances are zero. Therefore. V (xi )= n n 2 σ2 = σ2.

The variance of a sum of independent variables is the sum of their variances, since covariances are zero. Therefore. V (xi )= n n 2 σ2 = σ2. SAMPLE STATISTICS A radom sample x 1,x,,x from a distributio f(x) is a set of idepedetly ad idetically variables with x i f(x) for all i Their joit pdf is f(x 1,x,,x )=f(x 1 )f(x ) f(x )= f(x i ) The sample

More information

Lecture 19: Convergence

Lecture 19: Convergence Lecture 19: Covergece Asymptotic approach I statistical aalysis or iferece, a key to the success of fidig a good procedure is beig able to fid some momets ad/or distributios of various statistics. I may

More information

Lecture 33: Bootstrap

Lecture 33: Bootstrap Lecture 33: ootstrap Motivatio To evaluate ad compare differet estimators, we eed cosistet estimators of variaces or asymptotic variaces of estimators. This is also importat for hypothesis testig ad cofidece

More information

STAT431 Review. X = n. n )

STAT431 Review. X = n. n ) STAT43 Review I. Results related to ormal distributio Expected value ad variace. (a) E(aXbY) = aex bey, Var(aXbY) = a VarX b VarY provided X ad Y are idepedet. Normal distributios: (a) Z N(, ) (b) X N(µ,

More information

Introductory statistics

Introductory statistics CM9S: Machie Learig for Bioiformatics Lecture - 03/3/06 Itroductory statistics Lecturer: Sriram Sakararama Scribe: Sriram Sakararama We will provide a overview of statistical iferece focussig o the key

More information

TAMS24: Notations and Formulas

TAMS24: Notations and Formulas TAMS4: Notatios ad Formulas Basic otatios ad defiitios X: radom variable stokastiska variabel Mea Vätevärde: µ = X = by Xiagfeg Yag kpx k, if X is discrete, xf Xxdx, if X is cotiuous Variace Varias: =

More information

2. The volume of the solid of revolution generated by revolving the area bounded by the

2. The volume of the solid of revolution generated by revolving the area bounded by the IIT JAM Mathematical Statistics (MS) Solved Paper. A eigevector of the matrix M= ( ) is (a) ( ) (b) ( ) (c) ( ) (d) ( ) Solutio: (a) Eigevalue of M = ( ) is. x So, let x = ( y) be the eigevector. z (M

More information

Lecture 2: Monte Carlo Simulation

Lecture 2: Monte Carlo Simulation STAT/Q SCI 43: Itroductio to Resamplig ethods Sprig 27 Istructor: Ye-Chi Che Lecture 2: ote Carlo Simulatio 2 ote Carlo Itegratio Assume we wat to evaluate the followig itegratio: e x3 dx What ca we do?

More information

Lecture 18: Sampling distributions

Lecture 18: Sampling distributions Lecture 18: Samplig distributios I may applicatios, the populatio is oe or several ormal distributios (or approximately). We ow study properties of some importat statistics based o a radom sample from

More information

This section is optional.

This section is optional. 4 Momet Geeratig Fuctios* This sectio is optioal. The momet geeratig fuctio g : R R of a radom variable X is defied as g(t) = E[e tx ]. Propositio 1. We have g () (0) = E[X ] for = 1, 2,... Proof. Therefore

More information

Lecture 23: Minimal sufficiency

Lecture 23: Minimal sufficiency Lecture 23: Miimal sufficiecy Maximal reductio without loss of iformatio There are may sufficiet statistics for a give problem. I fact, X (the whole data set) is sufficiet. If T is a sufficiet statistic

More information

TMA4245 Statistics. Corrected 30 May and 4 June Norwegian University of Science and Technology Department of Mathematical Sciences.

TMA4245 Statistics. Corrected 30 May and 4 June Norwegian University of Science and Technology Department of Mathematical Sciences. Norwegia Uiversity of Sciece ad Techology Departmet of Mathematical Scieces Corrected 3 May ad 4 Jue Solutios TMA445 Statistics Saturday 6 May 9: 3: Problem Sow desity a The probability is.9.5 6x x dx

More information

Estimation for Complete Data

Estimation for Complete Data Estimatio for Complete Data complete data: there is o loss of iformatio durig study. complete idividual complete data= grouped data A complete idividual data is the oe i which the complete iformatio of

More information

Review Questions, Chapters 8, 9. f(y) = 0, elsewhere. F (y) = f Y(1) = n ( e y/θ) n 1 1 θ e y/θ = n θ e yn

Review Questions, Chapters 8, 9. f(y) = 0, elsewhere. F (y) = f Y(1) = n ( e y/θ) n 1 1 θ e y/θ = n θ e yn Stat 366 Lab 2 Solutios (September 2, 2006) page TA: Yury Petracheko, CAB 484, yuryp@ualberta.ca, http://www.ualberta.ca/ yuryp/ Review Questios, Chapters 8, 9 8.5 Suppose that Y, Y 2,..., Y deote a radom

More information

ECONOMETRIC THEORY. MODULE XIII Lecture - 34 Asymptotic Theory and Stochastic Regressors

ECONOMETRIC THEORY. MODULE XIII Lecture - 34 Asymptotic Theory and Stochastic Regressors ECONOMETRIC THEORY MODULE XIII Lecture - 34 Asymptotic Theory ad Stochastic Regressors Dr. Shalabh Departmet of Mathematics ad Statistics Idia Istitute of Techology Kapur Asymptotic theory The asymptotic

More information

7.1 Convergence of sequences of random variables

7.1 Convergence of sequences of random variables Chapter 7 Limit Theorems Throughout this sectio we will assume a probability space (, F, P), i which is defied a ifiite sequece of radom variables (X ) ad a radom variable X. The fact that for every ifiite

More information

Lecture Stat Maximum Likelihood Estimation

Lecture Stat Maximum Likelihood Estimation Lecture Stat 461-561 Maximum Likelihood Estimatio A.D. Jauary 2008 A.D. () Jauary 2008 1 / 63 Maximum Likelihood Estimatio Ivariace Cosistecy E ciecy Nuisace Parameters A.D. () Jauary 2008 2 / 63 Parametric

More information

LECTURE NOTES 9. 1 Point Estimation. 1.1 The Method of Moments

LECTURE NOTES 9. 1 Point Estimation. 1.1 The Method of Moments LECTURE NOTES 9 Poit Estimatio Uder the hypothesis that the sample was geerated from some parametric statistical model, a atural way to uderstad the uderlyig populatio is by estimatig the parameters of

More information

Econ 325 Notes on Point Estimator and Confidence Interval 1 By Hiro Kasahara

Econ 325 Notes on Point Estimator and Confidence Interval 1 By Hiro Kasahara Poit Estimator Eco 325 Notes o Poit Estimator ad Cofidece Iterval 1 By Hiro Kasahara Parameter, Estimator, ad Estimate The ormal probability desity fuctio is fully characterized by two costats: populatio

More information

4. Partial Sums and the Central Limit Theorem

4. Partial Sums and the Central Limit Theorem 1 of 10 7/16/2009 6:05 AM Virtual Laboratories > 6. Radom Samples > 1 2 3 4 5 6 7 4. Partial Sums ad the Cetral Limit Theorem The cetral limit theorem ad the law of large umbers are the two fudametal theorems

More information

STAT Homework 1 - Solutions

STAT Homework 1 - Solutions STAT-36700 Homework 1 - Solutios Fall 018 September 11, 018 This cotais solutios for Homework 1. Please ote that we have icluded several additioal commets ad approaches to the problems to give you better

More information

Homework for 2/3. 1. Determine the values of the following quantities: a. t 0.1,15 b. t 0.05,15 c. t 0.1,25 d. t 0.05,40 e. t 0.

Homework for 2/3. 1. Determine the values of the following quantities: a. t 0.1,15 b. t 0.05,15 c. t 0.1,25 d. t 0.05,40 e. t 0. Name: ID: Homework for /3. Determie the values of the followig quatities: a. t 0.5 b. t 0.055 c. t 0.5 d. t 0.0540 e. t 0.00540 f. χ 0.0 g. χ 0.0 h. χ 0.00 i. χ 0.0050 j. χ 0.990 a. t 0.5.34 b. t 0.055.753

More information

Probability and statistics: basic terms

Probability and statistics: basic terms Probability ad statistics: basic terms M. Veeraraghava August 203 A radom variable is a rule that assigs a umerical value to each possible outcome of a experimet. Outcomes of a experimet form the sample

More information

STATISTICAL INFERENCE

STATISTICAL INFERENCE STATISTICAL INFERENCE POPULATION AND SAMPLE Populatio = all elemets of iterest Characterized by a distributio F with some parameter θ Sample = the data X 1,..., X, selected subset of the populatio = sample

More information

Tests of Hypotheses Based on a Single Sample (Devore Chapter Eight)

Tests of Hypotheses Based on a Single Sample (Devore Chapter Eight) Tests of Hypotheses Based o a Sigle Sample Devore Chapter Eight MATH-252-01: Probability ad Statistics II Sprig 2018 Cotets 1 Hypothesis Tests illustrated with z-tests 1 1.1 Overview of Hypothesis Testig..........

More information

of the matrix is =-85, so it is not positive definite. Thus, the first

of the matrix is =-85, so it is not positive definite. Thus, the first BOSTON COLLEGE Departmet of Ecoomics EC771: Ecoometrics Sprig 4 Prof. Baum, Ms. Uysal Solutio Key for Problem Set 1 1. Are the followig quadratic forms positive for all values of x? (a) y = x 1 8x 1 x

More information

Sequences and Series of Functions

Sequences and Series of Functions Chapter 6 Sequeces ad Series of Fuctios 6.1. Covergece of a Sequece of Fuctios Poitwise Covergece. Defiitio 6.1. Let, for each N, fuctio f : A R be defied. If, for each x A, the sequece (f (x)) coverges

More information

HOMEWORK I: PREREQUISITES FROM MATH 727

HOMEWORK I: PREREQUISITES FROM MATH 727 HOMEWORK I: PREREQUISITES FROM MATH 727 Questio. Let X, X 2,... be idepedet expoetial radom variables with mea µ. (a) Show that for Z +, we have EX µ!. (b) Show that almost surely, X + + X (c) Fid the

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation ECE 645: Estimatio Theory Sprig 2015 Istructor: Prof. Staley H. Cha Maximum Likelihood Estimatio (LaTeX prepared by Shaobo Fag) April 14, 2015 This lecture ote is based o ECE 645(Sprig 2015) by Prof. Staley

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 2 9/9/2013. Large Deviations for i.i.d. Random Variables

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 2 9/9/2013. Large Deviations for i.i.d. Random Variables MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 2 9/9/2013 Large Deviatios for i.i.d. Radom Variables Cotet. Cheroff boud usig expoetial momet geeratig fuctios. Properties of a momet

More information

FACULTY OF MATHEMATICAL STUDIES MATHEMATICS FOR PART I ENGINEERING. Lectures

FACULTY OF MATHEMATICAL STUDIES MATHEMATICS FOR PART I ENGINEERING. Lectures FACULTY OF MATHEMATICAL STUDIES MATHEMATICS FOR PART I ENGINEERING Lectures MODULE 5 STATISTICS II. Mea ad stadard error of sample data. Biomial distributio. Normal distributio 4. Samplig 5. Cofidece itervals

More information

Generalized Semi- Markov Processes (GSMP)

Generalized Semi- Markov Processes (GSMP) Geeralized Semi- Markov Processes (GSMP) Summary Some Defiitios Markov ad Semi-Markov Processes The Poisso Process Properties of the Poisso Process Iterarrival times Memoryless property ad the residual

More information

Lecture 20: Multivariate convergence and the Central Limit Theorem

Lecture 20: Multivariate convergence and the Central Limit Theorem Lecture 20: Multivariate covergece ad the Cetral Limit Theorem Covergece i distributio for radom vectors Let Z,Z 1,Z 2,... be radom vectors o R k. If the cdf of Z is cotiuous, the we ca defie covergece

More information

Simulation. Two Rule For Inverting A Distribution Function

Simulation. Two Rule For Inverting A Distribution Function Simulatio Two Rule For Ivertig A Distributio Fuctio Rule 1. If F(x) = u is costat o a iterval [x 1, x 2 ), the the uiform value u is mapped oto x 2 through the iversio process. Rule 2. If there is a jump

More information

Questions and Answers on Maximum Likelihood

Questions and Answers on Maximum Likelihood Questios ad Aswers o Maximum Likelihood L. Magee Fall, 2008 1. Give: a observatio-specific log likelihood fuctio l i (θ) = l f(y i x i, θ) the log likelihood fuctio l(θ y, X) = l i(θ) a data set (x i,

More information

EFFECTIVE WLLN, SLLN, AND CLT IN STATISTICAL MODELS

EFFECTIVE WLLN, SLLN, AND CLT IN STATISTICAL MODELS EFFECTIVE WLLN, SLLN, AND CLT IN STATISTICAL MODELS Ryszard Zieliński Ist Math Polish Acad Sc POBox 21, 00-956 Warszawa 10, Polad e-mail: rziel@impagovpl ABSTRACT Weak laws of large umbers (W LLN), strog

More information

ECO 312 Fall 2013 Chris Sims LIKELIHOOD, POSTERIORS, DIAGNOSING NON-NORMALITY

ECO 312 Fall 2013 Chris Sims LIKELIHOOD, POSTERIORS, DIAGNOSING NON-NORMALITY ECO 312 Fall 2013 Chris Sims LIKELIHOOD, POSTERIORS, DIAGNOSING NON-NORMALITY (1) A distributio that allows asymmetry differet probabilities for egative ad positive outliers is the asymmetric double expoetial,

More information

CEE 522 Autumn Uncertainty Concepts for Geotechnical Engineering

CEE 522 Autumn Uncertainty Concepts for Geotechnical Engineering CEE 5 Autum 005 Ucertaity Cocepts for Geotechical Egieerig Basic Termiology Set A set is a collectio of (mutually exclusive) objects or evets. The sample space is the (collectively exhaustive) collectio

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 19 11/17/2008 LAWS OF LARGE NUMBERS II THE STRONG LAW OF LARGE NUMBERS

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 19 11/17/2008 LAWS OF LARGE NUMBERS II THE STRONG LAW OF LARGE NUMBERS MASSACHUSTTS INSTITUT OF TCHNOLOGY 6.436J/5.085J Fall 2008 Lecture 9 /7/2008 LAWS OF LARG NUMBRS II Cotets. The strog law of large umbers 2. The Cheroff boud TH STRONG LAW OF LARG NUMBRS While the weak

More information

Lecture 16: UMVUE: conditioning on sufficient and complete statistics

Lecture 16: UMVUE: conditioning on sufficient and complete statistics Lecture 16: UMVUE: coditioig o sufficiet ad complete statistics The 2d method of derivig a UMVUE whe a sufficiet ad complete statistic is available Fid a ubiased estimator of ϑ, say U(X. Coditioig o a

More information

Lecture 3. Properties of Summary Statistics: Sampling Distribution
