3. Maximum likelihood estimators and efficiency

3.1. Maximum likelihood estimators. Let X_1, ..., X_n be a random sample, drawn from a distribution P_θ that depends on an unknown parameter θ. We are looking for a general method to produce a statistic T = T(X_1, ..., X_n) that (we hope) will be a reasonable estimator for θ. One possible answer is the maximum likelihood method.

Suppose I observed the values x_1, ..., x_n. Before the experiment, the probability that exactly these values would occur was P_θ(X_1 = x_1, ..., X_n = x_n), and this will depend on θ. Since I did observe these values, maybe it's a good idea to look for a θ that maximizes this probability (which, to impress the uninitiated, we now call likelihood). Please do not confuse this maximization with the futile attempt to find the θ that is now most likely, given what I just observed. I really maximize over the condition: given that θ has some concrete value, we can work out the probability that what I observed occurred, and this is what I maximize.

Exercise 3.1. Please elaborate. Can you also make it plausible that there are (artificial) examples where the MLE is in fact quite likely to produce an estimate that is hopelessly off target?

Definition 3.1. We call a statistic θ̂ = θ̂(X_1, ..., X_n) a maximum likelihood estimator for θ if P_θ(X_1 = x_1, ..., X_n = x_n) is maximal at θ = θ̂(x_1, ..., x_n).

There is, in general, no guarantee that this maximum exists or (if it does) is unique, but we'll ignore this potential problem and just hope for the best. Also, observe that if we take the definition apart very carefully, we discover a certain amount of juggling around with arguments of functions: the MLE θ̂ is a statistic, that is, a random variable that is a function of the random sample, but the maximizing value of the parameter is obtained by replacing the X_j by their observed values x_j.
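As a concrete numerical illustration of Definition 3.1 (a sketch, not part of the notes): for a simulated coin-flip sample one can maximize the likelihood by brute force over a grid of θ values, and the maximizer turns out to coincide with the sample mean, the closed-form answer derived in Example 3.1 below. All sample sizes, seeds, and grid choices here are arbitrary.

```python
import numpy as np

def bernoulli_log_likelihood(theta, x):
    """Log-likelihood of a Bernoulli(theta) sample x of 0/1 values."""
    s = x.sum()
    n = x.size
    return s * np.log(theta) + (n - s) * np.log(1 - theta)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=50)  # a simulated 0/1 sample

# Grid search over theta; the maximizer agrees with the sample mean S/n
# up to the grid resolution.
grid = np.linspace(0.001, 0.999, 9999)
theta_hat = grid[np.argmax(bernoulli_log_likelihood(grid, x))]
assert abs(theta_hat - x.mean()) < 1e-3
```

In practice one maximizes the log-likelihood rather than the likelihood itself, since the logarithm turns the product over the sample into a sum and does not move the maximizer.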
Alternatively, we could say that we consider the likelihood function L(x_1, ..., x_n) = P(X_1 = x_1, ..., X_n = x_n), then plug the random variables X_j into their own likelihood function and finally maximize, which then produces a maximizer that is a random variable itself (and in fact a statistic). None of this matters a whole lot right now; we'll encounter this curious procedure (plug random variables into functions obtained from their own distribution) again in the next section.

Example 3.1. Let's return to the coin flip example: P(X_1 = 1) = θ, P(X_1 = 0) = 1 - θ, and here it's convenient to combine this into one
formula by writing P(X_1 = x) = θ^x (1-θ)^(1-x), for x = 0, 1. Thus

P(X_1 = x_1, ..., X_n = x_n) = θ^(Σ x_j) (1-θ)^(n - Σ x_j).

We are looking for the θ that maximizes this expression. Take the θ derivative and set this equal to zero. Also, let's abbreviate S = Σ x_j. Then

S θ^(S-1) (1-θ)^(n-S) - (n-S) θ^S (1-θ)^(n-S-1) = 0,

or S(1-θ) - (n-S)θ = 0, and this has the solution θ = S/n. (We'd now have to check that this is indeed a maximum, but we skip this part.) So the MLE for this distribution is given by θ̂ = T = X̄. It is reassuring that this obvious choice now receives some theoretical justification. We know that this estimator is unbiased. In general, however, MLEs can be biased. To see this, let's return to another example that was discussed earlier.

Example 3.2. Consider again the urn with an unknown number N = θ of balls in it, labeled 1, ..., N. We form a random sample X_1, ..., X_n by drawing n times, with replacement, according to the distribution P(X_1 = x) = (1/N) χ_{1,...,N}(x). For fixed x_1, ..., x_n ≥ 1, the probability of observing this outcome is then given by

(3.1) P(X_1 = x_1, ..., X_n = x_n) = N^(-n) if max x_j ≤ N, and = 0 if max x_j > N.

We want to find the MLE, so we are trying to maximize this over N, for fixed x_1, ..., x_n. Clearly, entering the second line of (3.1) is no good, so we must take N ≥ max x_j. For any such N, the quantity we're trying to maximize equals N^(-n), so we get the largest possible value by taking the smallest N that is still allowed. In other words, the MLE is given by N̂ = max X_j. We know that this estimator is not unbiased. Again, it is nice to see some theoretical justification emerging for an estimator that looked reasonable.

Example 3.3. Recall that the Poisson distribution with parameter θ > 0 is given by

P(X = x) = (θ^x / x!) e^(-θ), (x = 0, 1, 2, ...).

Let's try to find the MLE for θ. A random sample drawn from this distribution has the likelihood function

P(X_1 = x_1, ..., X_n = x_n) = (θ^(x_1 + ... + x_n) / (x_1! ··· x_n!)) e^(-nθ).
Christian Remling

We want to maximize this with respect to θ, so we can ignore the denominator, which does not depend on θ. Let's again write S = Σ x_j; we then want to maximize θ^S e^(-nθ). This leads to

S θ^(S-1) e^(-nθ) - n θ^S e^(-nθ) = 0,

or θ = S/n, that is θ̂ = X̄.

Exercise 3.2. Show that EX = θ if X is Poisson distributed with parameter θ. Conclude that the MLE is unbiased.

For random samples drawn from continuous distributions, the above recipe cannot literally be applied because P(X_1 = x_1, ..., X_n = x_n) = 0 always in this situation. However, we can modify it as follows: call a statistic θ̂ an MLE for θ if θ̂(x_1, ..., x_n) maximizes the (joint) density

f_{X_1,...,X_n}(x_1, ..., x_n; θ) = f(x_1; θ) f(x_2; θ) ··· f(x_n; θ),

for all possible values x_j of the random sample. In analogy to our terminology in the discrete case, we will again refer to this product of the densities as the likelihood function.

Example 3.4. Consider the exponential distribution with parameter θ; this is the distribution with density

(3.2) f(x) = (1/θ) e^(-x/θ) (x ≥ 0),

and f(x) = 0 for x < 0. Let's first find EX for an exponentially distributed random variable X:

EX = (1/θ) ∫_0^∞ x e^(-x/θ) dx = -x e^(-x/θ) |_0^∞ + ∫_0^∞ e^(-x/θ) dx = θ,

by an integration by parts in the first step. (So it is natural to use θ as the parameter, rather than 1/θ.) To find the MLE for θ, we have to maximize θ^(-n) e^(-S/θ) (writing, as usual, S = Σ x_j). This gives

-n θ^(-n-1) e^(-S/θ) + (S/θ²) θ^(-n) e^(-S/θ) = 0,

or θ = S/n, that is, as a statistic, θ̂ = X̄ (again...). This MLE is unbiased.

What would have happened if we had used η = 1/θ in (3.2) instead, to avoid the reciprocals? So f(x) = η e^(-ηx) for x ≥ 0, and I now want to find the MLE η̂ for η. In other words, I want to maximize η^n e^(-ηS), and proceeding as above, we find that this happens at η = n/S, or η̂ = 1/X̄. Now recall that η = 1/θ, and the MLE for θ was θ̂ = X̄. This is no coincidence; essentially, we solved the same maximization problem
twice, with slightly changed notation the second time. In general, we have the following (almost tautological) statement:

Theorem 3.2. Consider parameters η, θ that parametrize the same distribution. Suppose that they are related by η = g(θ), for a bijective g. Then, if θ̂ is an MLE for θ, then η̂ = g(θ̂) is an MLE for η.

Exercise 3.3. Give a somewhat more explicit version of the argument suggested above.

Notice, however, that the MLE is no longer unbiased after the transformation. This could be checked rather quickly by an indirect argument, but it is also possible to work things out explicitly. To get this started, let's first look at the distribution of the sum S_2 = X_1 + X_2 of two independent exponentially distributed random variables X_1, X_2. We know that the density of S_2 is the convolution of the density from (3.2) with itself:

f_2(x) = (1/θ²) ∫_0^x e^(-t/θ) e^(-(x-t)/θ) dt = (1/θ²) x e^(-x/θ).

Next, if we add one more independent random variable with this distribution, that is, if we consider S_3 = S_2 + X_3, then the density of S_3 can be obtained as the convolution of f_2 with the density f from (3.2), so

f_3(x) = (1/θ³) ∫_0^x t e^(-t/θ) e^(-(x-t)/θ) dt = (1/(2θ³)) x² e^(-x/θ).

Continuing in this style, we find that

f_n(x) = (1/((n-1)! θ^n)) x^(n-1) e^(-x/θ).

Exercise 3.4. Denote the density of S = S_n by f_n. Show that then S/n has density f(x) = n f_n(nx).

Since X̄ = S/n, the Exercise in particular says that X̄ has density

(3.3) f(x) = (n/((n-1)! θ^n)) (nx)^(n-1) e^(-nx/θ) (x ≥ 0).

This is already quite interesting, but let's keep going. We were originally interested in Y = 1/X̄, the MLE for η = 1/θ. We apply the usual technique to transform the densities:

P(Y ≤ y) = P(X̄ ≥ 1/y) = ∫_{1/y}^∞ f(x) dx,
and since the density g = f_Y of Y can be obtained as the y derivative of this, we see that

(3.4) g(y) = (1/y²) f(1/y) = (n/((n-1)! θ^n)) y^(-2) (n/y)^(n-1) e^(-n/(θy)) (y > 0).

This gives

EY = ∫_0^∞ y g(y) dy = (n/((n-1)! θ^n)) ∫_0^∞ y^(-1) (n/y)^(n-1) e^(-n/(θy)) dy
   = (n/((n-1)! θ)) ∫_0^∞ t^(n-2) e^(-t) dt.

We have used the substitution t = n/(θy) to pass to the second line. The integral can be evaluated by repeated integration by parts, or, somewhat more elegantly, you recognize it as Γ(n-1) = (n-2)!. So, putting things together, it follows that

E(1/X̄) = n/((n-1)θ) = (n/(n-1)) η.

In particular, Y = 1/X̄ is not an unbiased estimator for η; we are off by the factor n/(n-1) > 1 (which, however, is very close to 1 for large n).

Exercise 3.5. Check one more time that X̄ is an unbiased estimator for θ, this time by making use of the density f from (3.3) to compute EX̄ (in an admittedly rather clumsy way). You can again use the fact that Γ(k) = (k-1)! for k = 1, 2, ....

Example 3.5. Consider the uniform distribution on [0, θ]:

f(x) = 1/θ for 0 ≤ x ≤ θ, and f(x) = 0 otherwise.

We would like to find the MLE for θ. We then need to maximize with respect to θ (for given x_1, ..., x_n) the likelihood function

f(x_1) ··· f(x_n) = θ^(-n) if max x_j ≤ θ, and = 0 if max x_j > θ.

This first of all forces us to take θ ≥ max x_j, to enter the first line, and then θ as small as (still) possible, to maximize θ^(-n). Thus θ̂ = max(X_1, ..., X_n). This estimator is not unbiased.

Exercise 3.6. Why? This whole example is an exact (continuous) analog of its discrete version Example 3.2.
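The bias factor n/(n-1) just derived for Y = 1/X̄ can be checked by simulation; a minimal sketch, where the values of η, n and the number of repetitions are arbitrary choices, not from the notes:

```python
import numpy as np

# Monte Carlo check that E[1/Xbar] = (n/(n-1)) * eta for an exponential
# sample with rate eta (so mean 1/eta).
rng = np.random.default_rng(2)
eta, n, reps = 2.0, 5, 400000
samples = rng.exponential(scale=1 / eta, size=(reps, n))
est = (1.0 / samples.mean(axis=1)).mean()
expected = n / (n - 1) * eta  # the biased mean derived in the text
assert abs(est - expected) < 0.02
```

With n = 5 the estimator overshoots η by 25 percent on average, which matches the factor n/(n-1) and shrinks as n grows.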
Example 3.6. Finally, let's take a look at the normal distribution. Let's first find the MLE for θ = σ², for a normal distribution with known µ. We then need to maximize

θ^(-n/2) e^(-A/θ), A = (1/2) Σ (x_j - µ)².

This gives -(n/2)/θ + A/θ² = 0, or θ = 2A/n, that is,

(3.5) θ̂ = (1/n) Σ (X_j - µ)².

Exercise 3.7. (a) Show that n θ̂/σ² ~ χ²(n). (b) Conclude that θ̂ is unbiased.

By Theorem 3.2, the MLE for σ is then given by

σ̂ = sqrt( (1/n) Σ (X_j - µ)² ).

This estimator is not unbiased.

What if µ and σ are both unknown? There is an obvious way to adapt our procedure: we can maximize over both parameters simultaneously to obtain two statistics that can serve as MLE style estimators. So we now want to maximize

θ^(-n/2) exp( -(1/(2θ)) Σ (x_j - µ)² )

over both µ and θ. This leads to the two conditions

-n/(2θ) + (1/(2θ²)) Σ (x_j - µ)² = 0,   Σ (x_j - µ) = 0.

The second equation says that µ = (1/n) Σ x_j =: x̄, and then, by repeating the calculation from above, we see from this and the first equation that θ = (1/n) Σ (x_j - x̄)². In other words,

µ̂ = X̄,   θ̂ = (1/n) Σ (X_j - X̄)² = ((n-1)/n) S².

So µ̂ is unbiased, but θ̂ is not since ES² = σ² = θ, so E θ̂ = ((n-1)/n) θ.

Exercise 3.8. Find the MLE θ̂ for the following densities: (a) f(x) = θ x^(θ-1) for 0 < x < 1, and f(x) = 0 otherwise, and θ > 0; (b) f(x) = e^(θ-x) for x ≥ θ and f(x) = 0 otherwise.
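In numerical libraries the two normalizations of Example 3.6 show up as a single degrees-of-freedom switch; a short sketch using numpy (the sample values and seed are arbitrary):

```python
import numpy as np

# The MLE of the variance divides by n (numpy's default, ddof=0), while
# the unbiased estimator S^2 divides by n - 1 (ddof=1); the two differ
# exactly by the factor (n-1)/n noted in the text.
rng = np.random.default_rng(4)
x = rng.normal(0.0, 2.0, size=100)
n = x.size
mle = np.var(x)          # (1/n) * sum (x_j - xbar)^2
s2 = np.var(x, ddof=1)   # (1/(n-1)) * sum (x_j - xbar)^2
assert np.isclose(mle, (n - 1) / n * s2)
```

So the two estimators are deterministically related on any fixed sample; the bias statement E θ̂ = ((n-1)/n) θ is what this identity becomes after taking expectations.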
Exercise 3.9. Here's an example where the maximization does not produce a unique value. Consider the density f(x) = (1/2) e^(-|x-θ|). Assume for convenience that n = 2k is even and consider data x_1 < x_2 < ... < x_n. Then show that any θ in the interval x_k < θ < x_{k+1} maximizes the likelihood function.

Exercise 3.10. (a) Show that f(x, θ) = (1/θ²) x e^(-x/θ) (x ≥ 0) (and f(x) = 0 for x < 0) is a density for θ > 0. (b) Find the MLE θ̂ for θ. (c) Show that θ̂ is unbiased.

3.2. Cramer-Rao bounds. If an estimator is unbiased, it delivers the correct value at least on average. It would then be nice if this estimator showed only little variation about this correct value (of course, if T is biased, it is less clear if little variation about the incorrect value is a good thing). Let's take another look at our favorite example from this point of view. So P(X_1 = 1) = θ, P(X_1 = 0) = 1 - θ, and we are going to use the MLE T = θ̂ = X̄. Since the X_j are independent, the variances add up and thus

Var(T) = (1/n) Var(X_1) = θ(1-θ)/n

and σ_T = sqrt(θ(1-θ)/n) ≤ 1/(2√n). This doesn't look too bad. In particular, for large random samples, it gets small; it decays at the rate σ_T ~ 1/√n. Could we perhaps do better than this with a different unbiased estimator? It turns out that this is not the case. The statistic T = X̄ is optimal in this example in the sense that it has the smallest possible variance among all unbiased estimators. We now derive such a result in a general setting.

Let f(x, θ) be a density that depends on the parameter θ. We will assume throughout this section that f is sufficiently well behaved so that the following manipulations are justified, without actually making explicit a precise version of such assumptions. We will certainly need f to be twice differentiable with respect to θ since we will take this second derivative, but this on its own is not sufficient to justify some of the other steps (such as differentiating under the integral sign). We have that ∫ f dx = 1, so by taking the θ derivative (and interchanging differentiation and integral), we obtain that ∫ ∂f/∂θ dx = 0.
8 MLE ad efficiecy 29 This we may rewrite as (3.6) f(x, θ) l f(x, θ) dx =. θ There are potetial problems here with regios where f = ; to avoid these, I will simply iterpret (3.6) as a itegral over oly those parts of the real lie where f >. (To make sure that the argumet leadig to (3.6) is still justified i this settig, we should really make the additioal assumptio that {x : f(x, θ) > } does ot deped o θ, but we ll igore purely techical poits of this kid.) A alterative readig of (3.6) is E( / θ) l f(x, θ) =. Here (ad below) I use the geeral fact that Eg(X) = g(x)f(x) dx for ay fuctio g. Also ote the somewhat curious costructio here: we plug the radom variable X ito its ow desity (ad the take the logarithm) to produce the ew radom variable l f(x) (which also depeds o θ). If we take oe more derivative, the (3.6) becomes (3.7) ( ) 2 f(x, θ) 2 l f(x, θ) dx + f(x, θ) l f(x, θ) dx =. θ2 θ Defiitio 3.3. The Fisher iformatio is defied as I(θ) = E ( ) 2 l f(x, θ). θ This assumes that X is a cotiuous radom variable; i the discrete case, we replace f by P (X = x, θ) (ad agai plug X ito its ow distributio). From (3.7), we obtai the alterative formula (3.8) I(θ) = E 2 l f(x, θ); θ2 moreover, it is also true that (3.9) I(θ) = Var(( / θ) l f(x, θ)). Example 3.7. Let s retur oe more time to the coi flip example: P (X = x) = θ x (1 θ) 1 x (x =, 1), so l P = x l θ + (1 x) l(1 θ) ad (3.1) θ l P = x θ 1 x 1 θ.
To find the Fisher information, we plug X into this function and take the square. This produces

( X/θ - (1-X)/(1-θ) )² = X²/θ² + (1-X)²/(1-θ)² - 2X(1-X)/(θ(1-θ)).

Now recall that EX = EX² = θ, and take the expectation. The cross term has expectation zero (since E(X - X²) = 0), and we find that

I(θ) = θ/θ² + (1 - 2θ + θ)/(1-θ)² = 1/θ + 1/(1-θ) = 1/(θ(1-θ)).

Alternatively, we could have obtained the same result more quickly from (3.8). Take one more derivative in (3.10), plug X into the resulting function and take the expectation:

I(θ) = -E( -X/θ² - (1-X)/(1-θ)² ) = 1/θ + 1/(1-θ) = 1/(θ(1-θ)).

Example 3.8. Consider the N(θ, 1) distribution. Its density is given by f = (2π)^(-1/2) e^(-(x-θ)²/2), so ln f = -(x-θ)²/2 + C. Two differentiations produce (∂²/∂θ²) ln f = -1, so I = 1.

When dealing with a random sample X_1, ..., X_n, Definition 3.3 can be adapted by replacing f by what we called the likelihood function in the previous section. More precisely, we could replace (3.9) with

Var( (∂/∂θ) ln L(X_1, ..., X_n; θ) ),

where L(x_1, ..., x_n) = f(x_1) ··· f(x_n) (continuous case) or L(x_1, ..., x_n) = P(X_1 = x_1, ..., X_n = x_n) (discrete case). Then, however, we can use the product structure of L and independence to evaluate (in the continuous case, say)

Var( Σ (∂/∂θ) ln f(X_j, θ) ) = Σ Var( (∂/∂θ) ln f(X_j, θ) ) = n I(θ),

where now I is the Fisher information of an individual random variable X. An analogous calculation works in the discrete case.
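Formula (3.9) and the value I(θ) = 1/(θ(1-θ)) from Example 3.7 can be checked by estimating the mean and variance of the score (∂/∂θ) ln P(X, θ) by simulation; a rough sketch, with θ, the seed, and the number of draws chosen arbitrarily:

```python
import numpy as np

# Monte Carlo check of E[score] = 0 and Var(score) = I(theta) for the
# coin flip: score = X/theta - (1-X)/(1-theta), I(theta) = 1/(theta(1-theta)).
rng = np.random.default_rng(5)
theta, reps = 0.3, 1000000
x = rng.random(reps) < theta          # Bernoulli(theta) draws as booleans
score = x / theta - (~x) / (1 - theta)
assert abs(score.mean()) < 0.01                          # matches (3.6)
assert abs(score.var() - 1 / (theta * (1 - theta))) < 0.05  # matches (3.9)
```

By the additivity computation just above, the score of a whole sample of size n would have variance n I(θ).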
Theorem 3.4 (Cramer-Rao). Let T = T(X_1, ..., X_n) be a statistic and write k(θ) = ET. Then, under suitable (smoothness) assumptions,

Var(T) ≥ (k'(θ))² / (n I(θ)).

Corollary 3.5. If the statistic T in Theorem 3.4 is unbiased, then

Var(T) ≥ 1/(n I(θ)).

As an illustration, let's again look at the coin flip example with its MLE T = θ̂ = X̄. We saw earlier that Var(T) = θ(1-θ)/n, and this equals 1/(nI) by our calculation from Example 3.7. Since T is also unbiased, this means that this estimator achieves the Cramer-Rao bound from Corollary 3.5. We give a special name to estimators that are optimal, in this sense:

Definition 3.6. Let T be an unbiased estimator for θ. We call T efficient if T achieves the CR bound:

Var(T) = 1/(n I(θ)).

So we can summarize by saying that X̄ is an efficient estimator for θ. Let's now try to derive the CR bound. I'll do this for continuous random variables, with density f(x, θ). Then

k(θ) = ∫ dx_1 ∫ dx_2 ... ∫ dx_n T(x_1, ..., x_n) f(x_1, θ) ··· f(x_n, θ)

and thus (at least if we are allowed to freely interchange differentiations and integrals)

k'(θ) = ∫ dx_1 ∫ dx_2 ... ∫ dx_n T(x_1, ..., x_n) Σ_j (∂f(x_j, θ)/∂θ) Π_{i≠j} f(x_i, θ)
      = ∫ dx_1 ∫ dx_2 ... ∫ dx_n T(x_1, ..., x_n) ( Σ_j (∂/∂θ) ln f(x_j, θ) ) f(x_1, θ) ··· f(x_n, θ)
      = E T Z,
where we have abbreviated Z = Σ (∂/∂θ) ln f(X_j, θ). We know that EZ = 0 (compare (3.6)) and Var(Z) = nI, by independence of the X_j. We will now need the following very important and fundamental tool:

Exercise 3.11. Establish the Cauchy-Schwarz inequality: For any two random variables X, Y, we have that

|EXY| ≤ (EX²)^(1/2) (EY²)^(1/2).

Suggestion: Consider the parabola f(t) = E(X + tY)² and find its minimum.

Exercise 3.12. Can you also show that we have equality in the CSI precisely if X = cY or Y = cX for some c ∈ R?

Exercise 3.13. Define the correlation coefficient of two random variables X, Y as

ρ_{X,Y} = E(X - EX)(Y - EY) / (σ_X σ_Y).

Deduce from the CSI that -1 ≤ ρ ≤ 1. Also, show that ρ = 0 if X, Y are independent. (The converse of this statement is not true, in general.)

Since EZ = 0, we can write

k'(θ) = E T Z = E(T - ET)Z = E(T - ET)(Z - EZ),

and now the CSI shows that

k'² ≤ Var(T) Var(Z) = n I(θ) Var(T),

as claimed.

Exercise 3.14. Observe that the inequality was only introduced in the very last step. Thus, by Exercise 3.12, we have equality in the CR bound precisely if T - ET and Z are multiples of one another. In particular, this must hold for the efficient statistic T = X̄ from the coin flip example. Confirm directly that indeed X̄ - θ = cZ.

Example 3.9. We saw in Example 3.4 that the MLE for the exponential distribution f(x) = e^(-x/θ)/θ (x ≥ 0) is given by T = θ̂ = X̄ and that T is unbiased. Is T also efficient? To answer this, we compute the Fisher information: ln f = -ln θ - x/θ, so ∂² ln f/∂θ² = 1/θ² - 2x/θ³, and, taking expectations, we see that I = -E(1/θ² - 2X/θ³) = -1/θ² + 2θ/θ³ = 1/θ². On the other hand, Var(T) = (1/n) Var(X_1) and

EX_1² = (1/θ) ∫_0^∞ x² e^(-x/θ) dx = θ² ∫_0^∞ t² e^(-t) dt = 2θ²,
by two integrations by parts. This implies that Var(X_1) = EX_1² - (EX_1)² = θ², and thus Var(T) = θ²/n = 1/(nI), and T is indeed efficient.

Let's now take another look at the uniform distribution from Example 3.5. Its density equals

f(x, θ) = 1/θ for 0 < x < θ, and 0 otherwise;

recall that the MLE is given by θ̂ = max(X_1, ..., X_n). We know that T = θ̂ is not unbiased. Let's try to be more precise here. Since P(T ≤ t) = (t/θ)^n, the statistic T has density f(t) = n t^(n-1)/θ^n (0 < t < θ). It follows that

ET = (n/θ^n) ∫_0^θ t^n dt = (n/(n+1)) θ.

Exercise 3.15. Show by a similar calculation that ET² = (n/(n+2)) θ².

In particular, if we introduce

U = ((n+1)/n) T = ((n+1)/n) max(X_1, ..., X_n),

then this new statistic is unbiased (though it is no longer the MLE for θ). By the exercise,

EU² = ((n+1)/n)² ET² = ((n+1)²/(n(n+2))) θ²,

so

(3.11) Var(U) = θ²/(n(n+2)).

This looks great! In our previous examples, the variance decayed only at the rate 1/n, and here we now have that Var(U) ~ 1/n². Come to think of it, is this consistent with the CR bound? Doesn't Corollary 3.5 say that Var(T) ≥ 1/(nI), which decays only like 1/n, for any unbiased statistic T? The answer to this is that the whole theory doesn't apply here. The density f(x, θ) is not continuous (let alone differentiable) as a function of θ; it jumps at θ = x. In fact, the problems can be pinpointed more precisely: (3.6) fails, the integrand equals -1/θ², and (3.6) was used to deduce that EZ = 0, so the whole argument breaks down. Recall that by our discussion following (3.6), the integration in (3.6) is really only extended over 0 < x < θ, so problems with the jump of f are temporarily avoided. (However, I also remarked parenthetically that I would like the set {x : f(x, θ) > 0} to be independent of θ, and this clearly fails here.)
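Formula (3.11), and the contrast with the moment-based estimator V = 2X̄ discussed just below, can be checked by simulation; a minimal sketch in which θ, n, the seed and the repetition count are arbitrary choices:

```python
import numpy as np

# Monte Carlo check of (3.11): U = (n+1)/n * max X_j is unbiased for
# Uniform[0, theta] with Var(U) = theta^2/(n(n+2)), while the (also
# unbiased) estimator V = 2*Xbar has the much larger Var(V) = theta^2/(3n).
rng = np.random.default_rng(7)
theta, n, reps = 3.0, 8, 400000
x = rng.uniform(0, theta, size=(reps, n))
u = (n + 1) / n * x.max(axis=1)
v = 2 * x.mean(axis=1)
assert abs(u.mean() - theta) < 0.005
assert abs(u.var() - theta**2 / (n * (n + 2))) < 0.005
assert abs(v.var() - theta**2 / (3 * n)) < 0.005
assert u.var() < v.var()
```

Already at n = 8 the order-statistic estimator U beats V by roughly a factor of three in variance, and the gap widens like n as n grows.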
Let's compare U with another unbiased estimator. Let V = 2X̄. Since EX̄ = EX_1 = θ/2, this is indeed unbiased. It is a continuous analog of the unbiased estimator that we suggested (not very seriously, though) in the urn example from Chapter 2; see pg. 1. We have that Var(X̄) = Var(X_1)/n and

EX_1² = (1/θ) ∫_0^θ t² dt = θ²/3,

so Var(X_1) = θ²(1/3 - 1/4) = θ²/12, thus

Var(V) = θ²/(3n).

This is markedly inferior to (3.11). We right away had a bad feeling about V (in Chapter 2); this now receives precise theoretical confirmation.

Exercise 3.16. However, if n = 1, then Var(V) = Var(U). Can you explain this?

Exercise 3.17. Consider the density

f(x, θ) = 2x/θ² for 0 ≤ x ≤ θ, and 0 otherwise.

(a) Find the MLE θ̂. (b) Show that T = ((2n+1)/(2n)) θ̂ is unbiased. (c) Find Var(T). Suggestion: Proceed as in the discussion above.

Example 3.10. Let's return to the MLE T = θ̂ = X̄ for the Poisson distribution; compare Example 3.3. We saw earlier that this is unbiased. Is T also efficient? To answer this, we first work out the Fisher information: ln P(X = x, θ) = x ln θ - θ - ln x!, so by taking two derivatives and then the expectation, we find that I(θ) = EX/θ² = 1/θ. On the other hand,

EX_1² = Σ_{k=0}^∞ k² (θ^k/k!) e^(-θ) = θ² Σ_{k=0}^∞ (θ^k/k!) e^(-θ) + EX_1 = θ² + θ;

the first step follows by writing k² = k(k-1) + k. Thus Var(X_1) = θ, hence Var(T) = θ/n, and T is efficient.

Exercise 3.18. In this problem, you should frequently refer to results and calculations from Example 3.4. Consider the density f(x, θ) =
θ e^(-θx) (x ≥ 0) and f(x) = 0 for x < 0. Recall that T = ((n-1)/n) Y, Y = 1/X̄, is an unbiased estimator for θ. (a) Find the Fisher information I(θ) for this density. (b) Compute Var(T); conclude that T is not efficient. (Later we will see that T nevertheless has the smallest possible variance among all unbiased estimators.) Suggestion: Use the density of Y from (3.4) to work out EY², and then ET² and Var(T). Avoid the trap of forgetting that the θ of the present exercise corresponds to 1/θ in (3.4).

Example 3.11. Let's now try to estimate the variance of an N(0, σ) distribution. We take θ = σ² as the parameter labeling this family of densities. Two unbiased estimators come to mind:

T_1 = (1/n) Σ X_j²,   T_2 = S² = (1/(n-1)) Σ (X_j - X̄)².

We know from Example 3.6 that T_1 is the MLE for θ; see (3.5). We start out by computing the Fisher information. We have that ln f = -(1/2) ln θ - X²/(2θ) + C, so

I(θ) = -1/(2θ²) + (1/θ³) EX² = 1/(2θ²).

Next, independence gives that Var(T_1) = (1/n) Var(X_1²), and this latter variance we compute as EX_1⁴ - (EX_1²)².

Exercise 3.19. Show that EX_1⁴ = 3θ². Suggestion: Use integration by parts in the resulting integral.

Since EX_1² = Var(X_1) = θ, this shows that Var(X_1²) = 2θ² and thus Var(T_1) = 2θ²/n. So T_1 is efficient.

As for T_2, we recall that (n-1)S²/θ ~ χ²(n-1) and also that this is the distribution of the sum of the squares of n-1 iid N(0,1)-distributed random variables. In other words, (n-1)S²/θ has the same distribution as Z = Σ_{j=1}^{n-1} Y_j², with Y_j iid and Y_j ~ N(0,1). In particular, the variances agree, and Var(Z) = (n-1) Var(Y_1²) = 2(n-1), by the calculation we just did. Thus

Var(S²) = (θ²/(n-1)²) · 2(n-1) = 2θ²/(n-1),

and this estimator is not efficient (it comes very close though).
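The two variances from Example 3.11, Var(T_1) = 2θ²/n and Var(S²) = 2θ²/(n-1), can be checked side by side by simulation; a sketch with arbitrary parameter values:

```python
import numpy as np

# Monte Carlo comparison of the two variance estimators for N(0, sigma),
# theta = sigma^2: T1 (the efficient MLE, known mean 0) versus S^2.
rng = np.random.default_rng(8)
theta, n, reps = 1.0, 6, 400000
x = rng.normal(0.0, np.sqrt(theta), size=(reps, n))
t1 = (x**2).mean(axis=1)          # (1/n) * sum X_j^2
s2 = x.var(axis=1, ddof=1)        # (1/(n-1)) * sum (X_j - Xbar)^2
assert abs(t1.var() - 2 * theta**2 / n) < 0.01
assert abs(s2.var() - 2 * theta**2 / (n - 1)) < 0.01
```

As the text says, S² misses the bound 2θ²/n only by the factor n/(n-1): knowing the mean is 0 buys exactly one degree of freedom.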
If we had used n instead of the slightly unexpected n-1 in the denominator of the formula defining S², then the resulting estimator Y_3 = ((n-1)/n) S² has variance

(3.12) Var(Y_3) = 2(n-1)θ²/n² < 1/(n I(θ)).

This, of course, does not contradict the CR bound from Corollary 3.5: this estimator is not unbiased. On the contrary, everything is in perfect order, we only need to refer to Theorem 3.4, which handles this situation. Since k(θ) = EY_3 = ((n-1)/n) θ, we have that k'² = ((n-1)/n)², and the variance from (3.12) is in fact slightly larger (by a factor of n/(n-1)) than the lower bound provided by the theorem.

Exercise 3.20. Consider a random sample drawn from an N(θ, 1) distribution. Show that (the MLE) X̄ is an efficient estimator for θ.
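For Exercise 3.20 one can at least verify numerically (a check, not a proof) that Var(X̄) matches the CR bound 1/(nI(θ)) = 1/n, using I = 1 from Example 3.8; the parameter values below are arbitrary:

```python
import numpy as np

# For N(theta, 1), Example 3.8 gives I(theta) = 1, so the CR bound for an
# unbiased estimator is 1/n; a Monte Carlo estimate of Var(Xbar) matches it.
rng = np.random.default_rng(11)
theta, n, reps = 1.5, 25, 400000
xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
assert abs(xbar.mean() - theta) < 0.005   # Xbar is unbiased
assert abs(xbar.var() - 1 / n) < 0.001    # Var(Xbar) = 1/n = 1/(n I)
```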
ECE 645: Estimatio Theory Sprig 2015 Istructor: Prof. Staley H. Cha Maximum Likelihood Estimatio (LaTeX prepared by Shaobo Fag) April 14, 2015 This lecture ote is based o ECE 645(Sprig 2015) by Prof. Staley
More informationDiscrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Note 19
CS 70 Discrete Mathematics ad Probability Theory Sprig 2016 Rao ad Walrad Note 19 Some Importat Distributios Recall our basic probabilistic experimet of tossig a biased coi times. This is a very simple
More information7.1 Convergence of sequences of random variables
Chapter 7 Limit theorems Throughout this sectio we will assume a probability space (Ω, F, P), i which is defied a ifiite sequece of radom variables (X ) ad a radom variable X. The fact that for every ifiite
More information6. Sufficient, Complete, and Ancillary Statistics
Sufficiet, Complete ad Acillary Statistics http://www.math.uah.edu/stat/poit/sufficiet.xhtml 1 of 7 7/16/2009 6:13 AM Virtual Laboratories > 7. Poit Estimatio > 1 2 3 4 5 6 6. Sufficiet, Complete, ad Acillary
More informationLecture 3: August 31
36-705: Itermediate Statistics Fall 018 Lecturer: Siva Balakrisha Lecture 3: August 31 This lecture will be mostly a summary of other useful expoetial tail bouds We will ot prove ay of these i lecture,
More informationUnderstanding Samples
1 Will Moroe CS 109 Samplig ad Bootstrappig Lecture Notes #17 August 2, 2017 Based o a hadout by Chris Piech I this chapter we are goig to talk about statistics calculated o samples from a populatio. We
More informationDiscrete Mathematics for CS Spring 2007 Luca Trevisan Lecture 22
CS 70 Discrete Mathematics for CS Sprig 2007 Luca Trevisa Lecture 22 Aother Importat Distributio The Geometric Distributio Questio: A biased coi with Heads probability p is tossed repeatedly util the first
More informationMATH 320: Probability and Statistics 9. Estimation and Testing of Parameters. Readings: Pruim, Chapter 4
MATH 30: Probability ad Statistics 9. Estimatio ad Testig of Parameters Estimatio ad Testig of Parameters We have bee dealig situatios i which we have full kowledge of the distributio of a radom variable.
More informationJanuary 25, 2017 INTRODUCTION TO MATHEMATICAL STATISTICS
Jauary 25, 207 INTRODUCTION TO MATHEMATICAL STATISTICS Abstract. A basic itroductio to statistics assumig kowledge of probability theory.. Probability I a typical udergraduate problem i probability, we
More informationMath 10A final exam, December 16, 2016
Please put away all books, calculators, cell phoes ad other devices. You may cosult a sigle two-sided sheet of otes. Please write carefully ad clearly, USING WORDS (ot just symbols). Remember that the
More informationDistribution of Random Samples & Limit theorems
STAT/MATH 395 A - PROBABILITY II UW Witer Quarter 2017 Néhémy Lim Distributio of Radom Samples & Limit theorems 1 Distributio of i.i.d. Samples Motivatig example. Assume that the goal of a study is to
More informationSince X n /n P p, we know that X n (n. Xn (n X n ) Using the asymptotic result above to obtain an approximation for fixed n, we obtain
Assigmet 9 Exercise 5.5 Let X biomial, p, where p 0, 1 is ukow. Obtai cofidece itervals for p i two differet ways: a Sice X / p d N0, p1 p], the variace of the limitig distributio depeds oly o p. Use the
More informationOutput Analysis and Run-Length Control
IEOR E4703: Mote Carlo Simulatio Columbia Uiversity c 2017 by Marti Haugh Output Aalysis ad Ru-Legth Cotrol I these otes we describe how the Cetral Limit Theorem ca be used to costruct approximate (1 α%
More informationProblem Set 2 Solutions
CS271 Radomess & Computatio, Sprig 2018 Problem Set 2 Solutios Poit totals are i the margi; the maximum total umber of poits was 52. 1. Probabilistic method for domiatig sets 6pts Pick a radom subset S
More informationLecture 01: the Central Limit Theorem. 1 Central Limit Theorem for i.i.d. random variables
CSCI-B609: A Theorist s Toolkit, Fall 06 Aug 3 Lecture 0: the Cetral Limit Theorem Lecturer: Yua Zhou Scribe: Yua Xie & Yua Zhou Cetral Limit Theorem for iid radom variables Let us say that we wat to aalyze
More informationMath 113, Calculus II Winter 2007 Final Exam Solutions
Math, Calculus II Witer 7 Fial Exam Solutios (5 poits) Use the limit defiitio of the defiite itegral ad the sum formulas to compute x x + dx The check your aswer usig the Evaluatio Theorem Solutio: I this
More informationCHAPTER I: Vector Spaces
CHAPTER I: Vector Spaces Sectio 1: Itroductio ad Examples This first chapter is largely a review of topics you probably saw i your liear algebra course. So why cover it? (1) Not everyoe remembers everythig
More informationTopic 9: Sampling Distributions of Estimators
Topic 9: Samplig Distributios of Estimators Course 003, 2018 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be
More informationDiscrete Mathematics for CS Spring 2008 David Wagner Note 22
CS 70 Discrete Mathematics for CS Sprig 2008 David Wager Note 22 I.I.D. Radom Variables Estimatig the bias of a coi Questio: We wat to estimate the proportio p of Democrats i the US populatio, by takig
More informationExponential Families and Bayesian Inference
Computer Visio Expoetial Families ad Bayesia Iferece Lecture Expoetial Families A expoetial family of distributios is a d-parameter family f(x; havig the followig form: f(x; = h(xe g(t T (x B(, (. where
More informationStatistics 511 Additional Materials
Cofidece Itervals o mu Statistics 511 Additioal Materials This topic officially moves us from probability to statistics. We begi to discuss makig ifereces about the populatio. Oe way to differetiate probability
More informationSTAT Homework 2 - Solutions
STAT-36700 Homework - Solutios Fall 08 September 4, 08 This cotais solutios for Homework. Please ote that we have icluded several additioal commets ad approaches to the problems to give you better isight.
More informationPart I: Covers Sequence through Series Comparison Tests
Part I: Covers Sequece through Series Compariso Tests. Give a example of each of the followig: (a) A geometric sequece: (b) A alteratig sequece: (c) A sequece that is bouded, but ot coverget: (d) A sequece
More informationThe standard deviation of the mean
Physics 6C Fall 20 The stadard deviatio of the mea These otes provide some clarificatio o the distictio betwee the stadard deviatio ad the stadard deviatio of the mea.. The sample mea ad variace Cosider
More informationLecture 6 Ecient estimators. Rao-Cramer bound.
Lecture 6 Eciet estimators. Rao-Cramer boud. 1 MSE ad Suciecy Let X (X 1,..., X) be a radom sample from distributio f θ. Let θ ˆ δ(x) be a estimator of θ. Let T (X) be a suciet statistic for θ. As we have
More informationMath 2784 (or 2794W) University of Connecticut
ORDERS OF GROWTH PAT SMITH Math 2784 (or 2794W) Uiversity of Coecticut Date: Mar. 2, 22. ORDERS OF GROWTH. Itroductio Gaiig a ituitive feel for the relative growth of fuctios is importat if you really
More informationn outcome is (+1,+1, 1,..., 1). Let the r.v. X denote our position (relative to our starting point 0) after n moves. Thus X = X 1 + X 2 + +X n,
CS 70 Discrete Mathematics for CS Sprig 2008 David Wager Note 9 Variace Questio: At each time step, I flip a fair coi. If it comes up Heads, I walk oe step to the right; if it comes up Tails, I walk oe
More information1 Introduction to reducing variance in Monte Carlo simulations
Copyright c 010 by Karl Sigma 1 Itroductio to reducig variace i Mote Carlo simulatios 11 Review of cofidece itervals for estimatig a mea I statistics, we estimate a ukow mea µ = E(X) of a distributio by
More informationLecture 6 Chi Square Distribution (χ 2 ) and Least Squares Fitting
Lecture 6 Chi Square Distributio (χ ) ad Least Squares Fittig Chi Square Distributio (χ ) Suppose: We have a set of measuremets {x 1, x, x }. We kow the true value of each x i (x t1, x t, x t ). We would
More informationSTAT Homework 1 - Solutions
STAT-36700 Homework 1 - Solutios Fall 018 September 11, 018 This cotais solutios for Homework 1. Please ote that we have icluded several additioal commets ad approaches to the problems to give you better
More information32 estimating the cumulative distribution function
32 estimatig the cumulative distributio fuctio 4.6 types of cofidece itervals/bads Let F be a class of distributio fuctios F ad let θ be some quatity of iterest, such as the mea of F or the whole fuctio
More informationDiscrete Mathematics and Probability Theory Summer 2014 James Cook Note 15
CS 70 Discrete Mathematics ad Probability Theory Summer 2014 James Cook Note 15 Some Importat Distributios I this ote we will itroduce three importat probability distributios that are widely used to model
More informationLecture 6: Integration and the Mean Value Theorem. slope =
Math 8 Istructor: Padraic Bartlett Lecture 6: Itegratio ad the Mea Value Theorem Week 6 Caltech 202 The Mea Value Theorem The Mea Value Theorem abbreviated MVT is the followig result: Theorem. Suppose
More informationTopic 9: Sampling Distributions of Estimators
Topic 9: Samplig Distributios of Estimators Course 003, 2018 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be
More informationLECTURE 14 NOTES. A sequence of α-level tests {ϕ n (x)} is consistent if
LECTURE 14 NOTES 1. Asymptotic power of tests. Defiitio 1.1. A sequece of -level tests {ϕ x)} is cosistet if β θ) := E θ [ ϕ x) ] 1 as, for ay θ Θ 1. Just like cosistecy of a sequece of estimators, Defiitio
More informationREAL ANALYSIS II: PROBLEM SET 1 - SOLUTIONS
REAL ANALYSIS II: PROBLEM SET 1 - SOLUTIONS 18th Feb, 016 Defiitio (Lipschitz fuctio). A fuctio f : R R is said to be Lipschitz if there exists a positive real umber c such that for ay x, y i the domai
More informationIIT JAM Mathematical Statistics (MS) 2006 SECTION A
IIT JAM Mathematical Statistics (MS) 6 SECTION A. If a > for ad lim a / L >, the which of the followig series is ot coverget? (a) (b) (c) (d) (d) = = a = a = a a + / a lim a a / + = lim a / a / + = lim
More information3. Z Transform. Recall that the Fourier transform (FT) of a DT signal xn [ ] is ( ) [ ] = In order for the FT to exist in the finite magnitude sense,
3. Z Trasform Referece: Etire Chapter 3 of text. Recall that the Fourier trasform (FT) of a DT sigal x [ ] is ω ( ) [ ] X e = j jω k = xe I order for the FT to exist i the fiite magitude sese, S = x [
More informationMathematical Statistics - MS
Paper Specific Istructios. The examiatio is of hours duratio. There are a total of 60 questios carryig 00 marks. The etire paper is divided ito three sectios, A, B ad C. All sectios are compulsory. Questios
More information5. Likelihood Ratio Tests
1 of 5 7/29/2009 3:16 PM Virtual Laboratories > 9. Hy pothesis Testig > 1 2 3 4 5 6 7 5. Likelihood Ratio Tests Prelimiaries As usual, our startig poit is a radom experimet with a uderlyig sample space,
More informationThe Riemann Zeta Function
Physics 6A Witer 6 The Riema Zeta Fuctio I this ote, I will sketch some of the mai properties of the Riema zeta fuctio, ζ(x). For x >, we defie ζ(x) =, x >. () x = For x, this sum diverges. However, we
More informationEcon 325 Notes on Point Estimator and Confidence Interval 1 By Hiro Kasahara
Poit Estimator Eco 325 Notes o Poit Estimator ad Cofidece Iterval 1 By Hiro Kasahara Parameter, Estimator, ad Estimate The ormal probability desity fuctio is fully characterized by two costats: populatio
More informationChapter 6 Infinite Series
Chapter 6 Ifiite Series I the previous chapter we cosidered itegrals which were improper i the sese that the iterval of itegratio was ubouded. I this chapter we are goig to discuss a topic which is somewhat
More information10-701/ Machine Learning Mid-term Exam Solution
0-70/5-78 Machie Learig Mid-term Exam Solutio Your Name: Your Adrew ID: True or False (Give oe setece explaatio) (20%). (F) For a cotiuous radom variable x ad its probability distributio fuctio p(x), it
More informationGoodness-of-Fit Tests and Categorical Data Analysis (Devore Chapter Fourteen)
Goodess-of-Fit Tests ad Categorical Data Aalysis (Devore Chapter Fourtee) MATH-252-01: Probability ad Statistics II Sprig 2019 Cotets 1 Chi-Squared Tests with Kow Probabilities 1 1.1 Chi-Squared Testig................
More informationPRACTICE PROBLEMS FOR THE FINAL
PRACTICE PROBLEMS FOR THE FINAL Math 36Q Fall 25 Professor Hoh Below is a list of practice questios for the Fial Exam. I would suggest also goig over the practice problems ad exams for Exam ad Exam 2 to
More informationLecture 10 October Minimaxity and least favorable prior sequences
STATS 300A: Theory of Statistics Fall 205 Lecture 0 October 22 Lecturer: Lester Mackey Scribe: Brya He, Rahul Makhijai Warig: These otes may cotai factual ad/or typographic errors. 0. Miimaxity ad least
More informationSequences and Series of Functions
Chapter 6 Sequeces ad Series of Fuctios 6.1. Covergece of a Sequece of Fuctios Poitwise Covergece. Defiitio 6.1. Let, for each N, fuctio f : A R be defied. If, for each x A, the sequece (f (x)) coverges
More informationLecture 2: April 3, 2013
TTIC/CMSC 350 Mathematical Toolkit Sprig 203 Madhur Tulsiai Lecture 2: April 3, 203 Scribe: Shubhedu Trivedi Coi tosses cotiued We retur to the coi tossig example from the last lecture agai: Example. Give,
More informationStat410 Probability and Statistics II (F16)
Some Basic Cocepts of Statistical Iferece (Sec 5.) Suppose we have a rv X that has a pdf/pmf deoted by f(x; θ) or p(x; θ), where θ is called the parameter. I previous lectures, we focus o probability problems
More informationChapter 10: Power Series
Chapter : Power Series 57 Chapter Overview: Power Series The reaso series are part of a Calculus course is that there are fuctios which caot be itegrated. All power series, though, ca be itegrated because
More informationLecture 6 Chi Square Distribution (χ 2 ) and Least Squares Fitting
Lecture 6 Chi Square Distributio (χ ) ad Least Squares Fittig Chi Square Distributio (χ ) Suppose: We have a set of measuremets {x 1, x, x }. We kow the true value of each x i (x t1, x t, x t ). We would
More informationThe picture in figure 1.1 helps us to see that the area represents the distance traveled. Figure 1: Area represents distance travelled
1 Lecture : Area Area ad distace traveled Approximatig area by rectagles Summatio The area uder a parabola 1.1 Area ad distace Suppose we have the followig iformatio about the velocity of a particle, how
More informationDirection: This test is worth 150 points. You are required to complete this test within 55 minutes.
Term Test 3 (Part A) November 1, 004 Name Math 6 Studet Number Directio: This test is worth 10 poits. You are required to complete this test withi miutes. I order to receive full credit, aswer each problem
More informationDiscrete Mathematics and Probability Theory Spring 2012 Alistair Sinclair Note 15
CS 70 Discrete Mathematics ad Probability Theory Sprig 2012 Alistair Siclair Note 15 Some Importat Distributios The first importat distributio we leared about i the last Lecture Note is the biomial distributio
More informationElement sampling: Part 2
Chapter 4 Elemet samplig: Part 2 4.1 Itroductio We ow cosider uequal probability samplig desigs which is very popular i practice. I the uequal probability samplig, we ca improve the efficiecy of the resultig
More informationMath 113 Exam 3 Practice
Math Exam Practice Exam will cover.-.9. This sheet has three sectios. The first sectio will remid you about techiques ad formulas that you should kow. The secod gives a umber of practice questios for you
More informationDS 100: Principles and Techniques of Data Science Date: April 13, Discussion #10
DS 00: Priciples ad Techiques of Data Sciece Date: April 3, 208 Name: Hypothesis Testig Discussio #0. Defie these terms below as they relate to hypothesis testig. a) Data Geeratio Model: Solutio: A set
More informationEcon 325/327 Notes on Sample Mean, Sample Proportion, Central Limit Theorem, Chi-square Distribution, Student s t distribution 1.
Eco 325/327 Notes o Sample Mea, Sample Proportio, Cetral Limit Theorem, Chi-square Distributio, Studet s t distributio 1 Sample Mea By Hiro Kasahara We cosider a radom sample from a populatio. Defiitio
More informationAn Introduction to Randomized Algorithms
A Itroductio to Radomized Algorithms The focus of this lecture is to study a radomized algorithm for quick sort, aalyze it usig probabilistic recurrece relatios, ad also provide more geeral tools for aalysis
More informationFrequentist Inference
Frequetist Iferece The topics of the ext three sectios are useful applicatios of the Cetral Limit Theorem. Without kowig aythig about the uderlyig distributio of a sequece of radom variables {X i }, for
More informationMathematical Induction
Mathematical Iductio Itroductio Mathematical iductio, or just iductio, is a proof techique. Suppose that for every atural umber, P() is a statemet. We wish to show that all statemets P() are true. I a
More informationSOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker
SOME THEORY AND PRACTICE OF STATISTICS by Howard G. Tucker CHAPTER 9. POINT ESTIMATION 9. Covergece i Probability. The bases of poit estimatio have already bee laid out i previous chapters. I chapter 5
More information