Stat 643 Problems


1. Let (Ω, F) be a measurable space. Suppose that μ, ν and ρ are σ-finite positive measures on this space with ρ ≪ ν and ν ≪ μ.
a) Show that ρ ≪ μ and dρ/dμ = (dρ/dν)(dν/dμ) a.s. μ.
b) Show that if μ ≪ ν, then dν/dμ = (dμ/dν)⁻¹ a.s. μ.

2. (Cressie) Let Ω = [−c, c] for some c > 0, F be the set of Borel sets and P be Lebesgue measure divided by 2c. For a subset A of Ω, define the symmetric set for A as Aˢ = {ω : −ω ∈ A} and let C = {A ∈ F : A = Aˢ}.
a) Show that C is a sub σ-algebra of F.
b) Let X be an integrable random variable. Find E(X|C).

3. Let Ω = [0, 1]², F be the set of Borel sets and P = ½Δ + ½μ for Δ a distribution placing a unit point mass at the point (½, ½) and μ 2-dimensional Lebesgue measure. Consider the variable X(ω) = ω₁ and the sub σ-algebra of F generated by X, C = σ(X).
a) For A ∈ F, find E(I_A|C) = P(A|C).
b) For Y(ω) = ω₂, find E(Y|C).

4. Prove Result ___ in the typed notes.

5. Prove the fact stated in the paragraph above Result 5 in the typed notes.

6. Show directly that the following families of distributions have countable equivalent subsets:
a) P = {P_θ : θ ∈ Θ}, for P_θ uniform on the integers θ through ___, Θ = {…, −2, −1, 0, 1, 2, 3, …}.
b) P = {P_θ : θ ∈ Θ}, for P_θ uniform on the 2-dimensional disk of radius θ centered at the origin, Θ = (0, ∞).
c) The Poisson(θ) family, Θ = (0, ∞).

7. Suppose that X = (X₁, X₂, …, Xₙ) has independent components, where each Xᵢ is generated as follows. For independent random variables Wᵢ ~ Normal(μ, 1) and Zᵢ ~ Poisson(μ), Xᵢ = Wᵢ with probability p and Xᵢ = Zᵢ with probability 1 − p. Suppose that μ ∈ [___, ∞). Use the factorization theorem and find low dimensional sufficient statistics in the cases that:
a) p is known to be ___, and
b) p ∈ [0, 1] is unknown.
(In the first case the parameter space is {___} × [___, ∞), while in the second it is [0, 1] × [___, ∞).)

8. (Ferguson) Consider the probability distributions on (ℝ, B) defined as follows. For θ = (θ₁, p) ∈ Θ = ℝ × (0, 1) and X ~ P_θ, suppose
P[X = x] = p(1 − p)^(x − θ₁) for x = θ₁, θ₁ + 1, θ₁ + 2, …, and 0 otherwise.
Let X₁, X₂, …, Xₙ be iid according to P_θ.
a) Argue that the family {P_θ} is not dominated by a σ-finite measure, so that the factorization theorem can not be applied to identify a sufficient statistic here.
b) Argue from first principles that the statistic T(X) = (min Xᵢ, Σ Xᵢ) is sufficient for the parameter θ.
c) Argue that the factorization theorem can be applied if θ₁ is known (the first factor of the parameter space is replaced by a single point) and identify a sufficient statistic for this case.
d) Argue that if p is known, min Xᵢ is a sufficient statistic.

9. Suppose that X′ is exponential with mean 1/λ (i.e. has density f(x|λ) = λ exp(−λx)I[x > 0] with respect to Lebesgue measure on ℝ), but that one only observes X = X′I[X′ > 1]. (There is interval censoring below x = 1.)
a) Consider the measure μ on 𝒳 = {0} ∪ (1, ∞) consisting of a point mass of 1 at 0 plus Lebesgue measure on (1, ∞). Give a formula for the R-N derivative of P_λ^X wrt μ on 𝒳.
b) Suppose that X₁, X₂, …, Xₙ are iid with the marginal distribution P_λ. Find a 2-dimensional sufficient statistic for this problem and argue that it is indeed sufficient.
c) Argue carefully that your statistic from b) is minimal sufficient.

10. Suppose that X₁, X₂, …, Xₙ are iid with marginal density with respect to Lebesgue measure on (0, 1) of the form
f(x|θ) = 2θ if x ∈ (0, ½), and f(x|θ) = 2(1 − θ) if x ∈ [½, 1).
Find a low dimensional sufficient statistic and prove that it is minimal.

11. (Lehmann TPE page 65) Let X₁, …, X_m and Y₁, …, Yₙ be independent normal random variables with EX = ξ, Var X = σ², EY = η and Var Y = τ². Find minimal sufficient statistics for the following situations:
a) no restrictions on the parameters,
b) σ = τ, and
c) ξ = η.

12. (Lehmann TPE page 65) Let f be a positive integrable function over (0, ∞) and let p(x|θ) = c(θ)f(x)I[0 < x < θ]. If X₁, X₂, …, Xₙ are iid with the marginal density p(x|θ), show that max Xᵢ is sufficient for θ.

13. Schervish Exercise 2.___. And show that Z is ancillary.

14. Schervish Exercise ___. (I think you need to assume that {x} spans ℝᵏ.)

15. Schervish Exercise ___.

16. Schervish Exercise ___.

17. Schervish Exercise ___. What is a first order ancillary statistic here?

18. Schervish Exercise ___.

19. Suppose that S is a convex subset of [0, ∞)ᵏ. Argue that there exists a decision problem that has S as its set of risk vectors. (Consider problems where X is degenerate and carries no information.)

20. Consider the two state decision problem with Θ = A = {0, 1}, P₀ the Bernoulli(¼) distribution, P₁ the Bernoulli(½) distribution, and L(θ, a) = I[θ ≠ a].
a) Find the set of risk vectors for the four nonrandomized decision rules. Plot these in the plane. Sketch S, the risk set for this problem.
b) For this problem, show explicitly that any randomized decision rule has a corresponding behavioral rule with identical risk vector and vice versa.
c) Identify the set of all admissible risk vectors for this problem. Is there a minimal complete class for this decision problem? If there is one, what is it?
d) For each p ∈ [0, 1], identify those risk vectors that are Bayes versus the prior π = (p, 1 − p). For which priors are there more than one Bayes rule?
e) Verify directly that the prescription "choose an action that minimizes the posterior expected loss" produces a Bayes rule versus the prior π = (½, ½).

21. Consider a two state decision problem with Θ = {0, 1}, where the observable X = (X₁, X₂) has iid Bernoulli(¼) coordinates if θ = 0 and iid Bernoulli(½) coordinates if θ = 1. Suppose that A = {0, 1} and L(θ, a) = I[θ ≠ a]. Consider the behavioral decision rule φ_x defined by φ_x({1}) = 1 if x₁ + x₂ = 2, and φ_x({0}) = φ_x({1}) = ½ if x₁ + x₂ < 2.
a) Show that φ_x is inadmissible by finding a rule with a better risk function. (It may be helpful to figure out what the risk set is for this problem, in a manner similar to what you did in problem 20.)
b) Use the construction from Result 21 and find a behavioral decision rule that is a function of the sufficient statistic X₁ + X₂ and is risk equivalent to φ_x. (Note that this rule is inadmissible.)

22. Consider the squared error loss estimation of p ∈ (0, 1) based on X ~ binomial(n, p), and the two nonrandomized decision rules δ₁(x) = x/n and δ₂(x) = (x + 1)/(n + 2). Let ψ be a randomized decision function that chooses δ₁ with probability ½ and δ₂ with probability ½.
a) Write out expressions for the risk functions of δ₁, δ₂ and ψ.
b) Find a behavioral rule that is risk equivalent to ψ.
c) Identify a nonrandomized estimator that is strictly better than ψ.

23. Suppose that Θ ⊂ Θ′ and that a decision rule φ is such that for each θ ∈ Θ′, φ is admissible when the parameter space is Θ ∪ {θ}. Show that φ is then admissible when the parameter space is Θ′.

24. Suppose that w(θ) > 0 for all θ. Show that φ is admissible with loss function L(θ, a) iff it is admissible with loss function w(θ)L(θ, a).

25. Let P₀ and P₁ be two distributions on 𝒳 and f₀ and f₁ be densities of these with respect to some dominating σ-finite measure μ. Consider the parametric family of distributions with parameter p ∈ [0, 1] and densities wrt μ of the form
f_p(x) = (1 − p)f₀(x) + pf₁(x),
and suppose that X has density f_p.
a) If G({0}) = G({1}) = ½, find the squared error loss formal Bayes estimator of p versus the prior G.
b) Suppose now that G is the uniform distribution on [0, 1]. Find the squared error loss Bayes estimator of p versus the prior G.
c) Argue that your estimator from b) is admissible.

26. For the model of problem 25, suppose that P₀ is the binomial(n, ¼) distribution and P₁ is the binomial(n, ¾) distribution. Find an ancillary statistic. (Hint: consider the probabilities that X takes the values x and n − x.) Is the family of problem 25 then necessarily boundedly complete?

27. (Ferguson) Prove or give a counterexample: If C₁ and C₂ are complete classes of decision rules, then C₁ ∩ C₂ is essentially complete.

28. Problem 9, page 209 of Schervish.

29. (Ferguson and others) Suppose that X ~ binomial(n, p) and one wishes to estimate p ∈ (0, 1). Suppose first that
L(p, a) = (p − a)² / (p(1 − p)).
a) Show that X/n is Bayes versus the uniform prior on (0, 1).
b) Argue that X/n is admissible in this decision problem.
Now consider ordinary squared error loss, L(p, a) = (p − a)².
c) Apply the result of problem 24 and prove that X/n is admissible under this loss function as well.
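Problem 29a can be checked in closed form: under the weighted loss (p − a)²/(p(1 − p)) the Bayes estimate against a prior G is E[w(p)p | x]/E[w(p) | x] with w(p) = 1/(p(1 − p)), and with a uniform prior the Beta integrals collapse to x/n. A numerical confirmation of that ratio (for interior x; the boundary cases x = 0 and x = n need separate handling):

```python
import math

# Problem 29a sketch: with a uniform prior on p and loss (p - a)^2 / (p(1 - p)),
# the Bayes estimate at X = x is
#   E[p * w(p) | x] / E[w(p) | x] = B(x + 1, n - x) / B(x, n - x) = x / n,
# where B is the Beta function and 0 < x < n.
def beta_fn(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

n = 5
for x in range(1, n):
    bayes_est = beta_fn(x + 1, n - x) / beta_fn(x, n - x)
    assert abs(bayes_est - x / n) < 1e-12
```

The same ratio-of-Beta-functions computation is what part a) asks you to carry out analytically; the weight w(p) is exactly what cancels the p(1 − p) factor in the Beta posterior.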

30. Consider a two state decision problem where Θ = A = {0, 1}, P₀ and P₁ have respective densities f₀ and f₁ with respect to a dominating σ-finite measure μ, and the loss function is L(θ, a).
a) For G an arbitrary prior distribution, find a formal Bayes rule versus G.
b) Specialize your result from a) to the case where L(θ, a) = I[θ ≠ a]. What connection does the form of these rules have to the theory of simple versus simple hypothesis testing?

31. (Ferguson and others) Suppose that one observes independent binomial random variables X ~ binomial(m, p₁) and Y ~ binomial(n, p₂) and needs to estimate p₂ − p₁ with squared error loss, L((p₁, p₂), a) = (p₂ − p₁ − a)². (We will assume that the parameter space is (0, 1)² and that the action space is (−1, 1).) Find the form of a rule Bayes against a prior uniform over the parameter space.

32. Consider a linear loss two action decision problem, Θ = ℝ, A = {0, 1} and
L(θ, a) = (θ₀ − θ)I[a = 1]I[θ < θ₀] + (θ − θ₀)I[a = 0]I[θ > θ₀]
for some θ₀ of interest. Find the form of a Bayes rule for this problem. (Hint: Consider the difference L(θ, 0) − L(θ, 1).)

33. Problem 24, page 211 Schervish.

34. Problem 34, page 142 Schervish.

35. Problem 36, page 142 Schervish.

36. Problem 2, page 339 Schervish.

37. Problem 3, page 339 Schervish.

38. Problem 4, page 339 Schervish.

39. Problem 5, page 339 Schervish.

40. Problem 7, page 339 Schervish.

41. Consider the family of bivariate observations X = (Y, Z) on 𝒳 = (0, 1)² with densities wrt two-dimensional Lebesgue measure
f(x|η) ∝ exp(η₁y + η₂z)
for a bivariate parameter vector η = (η₁, η₂).
a) Find the natural parameter space for this family of distributions.
b) What is the normalizing constant K(η) for this family? Use it to find E_η Y.
c) Find a UMVUE of the quantity (E_η Y)(E_η Z) based on iid vectors X₁, X₂, …, Xₙ with marginal (bivariate) distribution specified by f(x|η). Argue carefully that your estimator is UMVUE.
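For problem 41b, the normalizing constant factors coordinatewise, and E_η Y comes from differentiating its logarithm. A quick numerical check of that calculus, assuming the marginal density of Y works out to η e^(ηy)/(e^η − 1) on (0, 1):

```python
import math

# Problem 41b sketch: if Y has density eta * exp(eta * y) / (exp(eta) - 1) on
# (0, 1) (an assumption matching the reconstructed family), differentiating the
# log of the normalizing constant gives
#   E Y = e^eta / (e^eta - 1) - 1 / eta.
def mean_y(eta):
    return math.exp(eta) / (math.exp(eta) - 1.0) - 1.0 / eta

# midpoint-rule check of the closed form
eta = 1.3
m = 20000
h = 1.0 / m
riemann = sum(
    (i + 0.5) * h * eta * math.exp(eta * (i + 0.5) * h) for i in range(m)
) * h / (math.exp(eta) - 1.0)
assert abs(riemann - mean_y(eta)) < 1e-6
```

The full normalizer is the product of two such one-dimensional factors, so E_η Z has the same form with η₂ in place of η₁.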

42. Suppose that X ~ Bernoulli(p) and that one wishes to estimate p with loss L(p, a) = |p − a|. Consider the estimator δ with δ(0) = ¼ and δ(1) = ¾.
a) Write out the risk function for δ and show that R(p, δ) ≤ ¼.
b) Show that there is a prior distribution placing all its mass on {0, ½, 1} against which δ is Bayes.
c) Prove that δ is minimax in this problem and identify a least favorable prior.

43. Consider a decision problem where P_θ is the Normal(θ, 1) distribution on 𝒳 = ℝ, A = {0, 1} and L(θ, a) = I[a = 0]I[θ > 5] + I[a = 1]I[θ ≤ 5].
a) For Θ = (−∞, 5] ∪ [6, ∞), guess what prior distribution is least favorable, find the corresponding Bayes decision rule and prove that it is minimax.
b) For Θ = ℝ, guess what decision rule might be minimax, find its risk function and prove that it is minimax.

44. Consider the scenario of problem 29. Show that the estimator X/n there is minimax for the weighted squared error loss and identify a least favorable prior.

45. (Ferguson and others) Consider (inverse mean) weighted squared error loss estimation of λ, the mean of a Poisson distribution. That is, let Θ = (0, ∞), A = Θ, P_λ be the Poisson distribution on 𝒳 = {0, 1, 2, 3, …} and L(λ, a) = λ⁻¹(λ − a)². Let δ(X) = X.
a) Show that δ is an equalizer rule.
b) Show that δ is generalized Bayes versus Lebesgue measure on Θ.
c) Find the Bayes estimators wrt the Γ(α, β) priors on Θ.
d) Prove that δ is minimax for this problem.

46. (Berger page 35) Suppose that X is Poisson with mean λ and that one wishes to test H₀: λ = 1 vs H_a: λ = 2 with 0-1 loss, i.e. Θ = A = {1, 2} and L(λ, a) = I[λ ≠ a].
a) Find the form of the nonrandomized Bayes tests for this problem.
b) Using your answer to part a), find enough risk points for nonrandomized Bayes rules to sketch the part of the lower boundary of the risk set relevant to finding a minimax rule.
c) On the basis of your sketch from part b), find a minimax rule for this problem and identify a least favorable distribution.

47. Consider the left-truncated Poisson distribution with pmf
f(x|θ) = exp(−θ)θˣ / (x!(1 − exp(−θ))) for x = 1, 2, 3, ….
Suppose that X₁, X₂, …, Xₙ are iid variables with this distribution.
a) Show that T = Σᵢ₌₁ⁿ Xᵢ is a complete sufficient statistic for θ ∈ (0, ∞).
b) It turns out that the pmf of T is given by
g(t|θ) = S_t n! θᵗ / (t!(exp(θ) − 1)ⁿ) for t = n, n + 1, n + 2, …,
where S_t is the so-called Stirling number of the second kind (whose exact definition is not important here). Find the UMVUE of θ. (It will involve the S_t's.)

48. Consider an observable X taking values in 𝒳 and with distribution P_θ for θ ∈ Θ. For a statistic T, let F₀ = σ(T) and P_θ⁰ be the restriction of P_θ to F₀. Prove that if T is complete and sufficient and φ(T) is unbiased for γ(θ), then another random variable U is an unbiased estimator of γ(θ) iff E_θ(U|T) = φ(T) a.s. P_θ, for all θ.

49. Suppose that under the usual set-up T and S are two real-valued statistics.
a) Suppose that E_θ(T(X))² < ∞ and E_θ(S(X))² < ∞ for all θ. Show that if T(X) and S(X) are both unbiased estimators of θ and T(X) and S(X) − T(X) are uncorrelated for all θ, then Var_θ T(X) ≤ Var_θ S(X).
b) Suppose that S(X) is an unbiased estimator of θ, E_θ(S(X))² < ∞ for all θ, and that V = E_θ[S|T] doesn't depend upon θ. Argue that V is preferable to S under squared error loss.

50. Suppose that X is a real-valued random variable with density (wrt the σ-finite measure μ) defined by
f(x|η) = exp(a(η) + ηx)h(x)
for a positive function h(·) on ℝ and η ∈ (c, d) = Γ ⊂ ℝ, the natural parameter space.
a) Find μ(η) = E_η X.
b) For a fixed η ∈ Γ, let M(s|η) = E_η exp(sX). Find M(s|η), determine for which values of s it is finite, and show that it can be written in terms of a(η) and a(η + s).

51. Problem 14, page 340 Schervish.

52. Problem 19, page 341 Schervish. Also, find the MRE estimator of θ under squared error loss for this model.

53. Show that the C-R inequality is not changed by a smooth reparameterization. That is, suppose that P = {P_θ : θ ∈ Θ} is dominated by a σ-finite measure μ and satisfies
i) f(x|θ) > 0 for all θ and x,
ii) for all x, (∂/∂θ)f(x|θ) exists and is finite everywhere on Θ ⊂ ℝ, and
iii) for any statistic δ with E_θ|δ(X)| < ∞ for all θ, E_θ δ(X) can be differentiated under the integral sign at all points of Θ.
Let h be a function to ℝ such that h′ is continuous and nonvanishing. Let γ = h(θ) and define Q_γ = P_θ. Show that the information inequality bound obtained from {Q_γ} evaluated at γ = h(θ) is the same as the bound obtained from P.

54. Let N = (N₁, N₂, …, N_k) be multinomial (n; p₁, p₂, …, p_k) where Σᵢ pᵢ = 1.
a) Find the form of the Fisher information matrix based on the parameter p.
b) Suppose that the pᵢ(θ), for i = 1, 2, …, k, are differentiable functions of a real parameter θ in an open interval, where each pᵢ(θ) > 0 and Σᵢ pᵢ(θ) = 1. Suppose that h(y₁, y₂, …, y_k) is a continuous real-valued function with continuous first partial derivatives and define q(θ) = h(p₁(θ), p₂(θ), …, p_k(θ)). Show that the information bound for unbiased estimators of q(θ) in this context is
(q′(θ))² / (nI(θ)), where I(θ) = Σᵢ pᵢ(θ)((d/dθ) log pᵢ(θ))².

55. As an application of Lemma 61 (or 61′) consider the following. Suppose that X ~ U(θ₁, θ₂) and consider unbiased estimators of a function γ(θ₁, θ₂). For θ₁ < θ₁′ and θ₂′ < θ₂, apply Lemma 61 with
g₁(x) = f(x|θ₁′, θ₂)/f(x|θ₁, θ₂) and g₂(x) = f(x|θ₁, θ₂′)/f(x|θ₁, θ₂).
What does Lemma 61 then say about the variance of an unbiased estimator of γ(θ₁, θ₂)? (If you can do it, sup over values of θ₁′ and θ₂′. I've not tried this, so am not sure how it comes out.)

56. Problem 11, page 39 Schervish.

57. Take the simplest possible 1-dimensional statistical estimation problem, X₁, X₂, …, Xₙ iid Bernoulli(p), and make your way through the results on maximum likelihood, carefully seeing and saying what they assume and what they yield. (Find I(p) and the loglikelihood, verify hypotheses of the theorems and see what they say about estimation of p. For sake of argument, let T = Σᵢ₌₁ⁿ Xᵢ and have a look at the corresponding p̂. What do the theorems say about the behavior of p̂ when p = 0 or 1? What are approximate confidence intervals for p? Etc.)

58. Suppose that X₁, X₂, … are iid, each taking values in 𝒳 = {0, 1, 2} with R-N derivative of P_θ wrt counting measure on 𝒳
f(x|θ) = exp(xθ) / (1 + exp(θ) + exp(2θ)).
a) Find an estimator of θ based on p̂₀ = (1/n)Σᵢ I[Xᵢ = 0] that is √n consistent (i.e. for which √n(θ̃ − θ) converges in distribution).
b) Find in more or less explicit form a one step Newton modification of your estimator from a).
c) Prove directly that your estimator from b) is asymptotically normal with variance 1/I₁(θ). (With θ̃ the estimator from a) and θ̂ the estimator from b), write
L′(θ̂) = L′(θ̃) + L″(θ̃)(θ̂ − θ̃) + ½L‴(θ*)(θ̂ − θ̃)²
for some θ* between θ̃ and θ̂.)
d) Show that provided x̄ ∈ (0, 2), the loglikelihood has a maximizer
θ̂ = log( (x̄ − 1 + √(1 + 6x̄ − 3x̄²)) / (2(2 − x̄)) ).
Prove that an estimator defined to be θ̂ when x̄ ∈ (0, 2) will be asymptotically normal with variance 1/I₁(θ).
e) Show that the observed information and expected information approximations lead to the same large sample confidence intervals for θ. What do these look like based on, say, θ̂?
By the way, a version of nearly everything in this problem works in any one parameter exponential family. See the Theory II question on last year's prelim.

59. Prove the following filling-in lemma:
Lemma. Suppose that g₀ and g₁ are two distinct, positive probability densities defined on an interval in ℝ. If the ratio g₁/g₀ is nondecreasing in a real-valued function T(x), then the family of densities {g_α : α ∈ [0, 1]} for g_α = (1 − α)g₀ + αg₁ has the MLR property in T(x).

60. Consider the situation of problem 9 and maximum likelihood estimation of λ.
a) Show that with M = #[Xᵢ's equal to 0], in the event that M = n there is no MLE of λ, but that in all other cases there is a maximizer of the likelihood. Then argue that for any λ, with P_λ probability tending to 1, the MLE of λ, say λ̂ₙ, exists.
b) Give a simple estimator of λ based on M alone. Prove that this estimator is consistent for λ. Then write down an explicit one-step Newton modification of this estimator.
c) Discuss what numerical methods you could use to find the MLE from a) in the event that it exists.
d) Give two forms of large sample confidence intervals for λ based on the MLE λ̂ and two different approximations to I(λ).

61. Consider the two distributions P₀ and P₁ on 𝒳 = (0, 1) with densities wrt Lebesgue measure f₀(x) = 1 and f₁(x) = 3x².
a) Find a most powerful level α = .1 test of H₀: X ~ P₀ vs H₁: X ~ P₁.
b) Plot both the 0-1 loss risk set S and the set V = {(E₀φ, E₁φ) : φ a test} for the simple versus simple testing problem involving f₀ and f₁. What test is Bayes versus a uniform prior? This test is best of what size?
c) Take the result of Problem 59 as given. Consider the family of mixture distributions P = {P_θ : θ ∈ [0, 1]} where P_θ = (1 − θ)P₀ + θP₁. In this family find a UMP level α = .1 test of the hypothesis H₀: θ ≤ .5 vs H₁: θ > .5 based on a single observation X. Argue carefully that your test is really UMP.
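The Neyman-Pearson structure in problem 61a reduces to a one-line calculation: the likelihood ratio 3x² is increasing in x, so the most powerful test rejects for x above a cutoff set by the uniform null. A sketch (the level α = .1 is an assumption, since the digit is garbled in the source):

```python
# Problem 61a sketch: most powerful level-alpha test of f0(x) = 1 vs
# f1(x) = 3x^2 on (0, 1).  By Neyman-Pearson, reject for large likelihood
# ratio 3x^2, i.e. for x > c with P0(X > c) = 1 - c = alpha.
alpha = 0.1                     # assumed level; garbled in the source
c = 1.0 - alpha                 # cutoff: reject H0 when x > c
size = 1.0 - c                  # P0(X > c) under the uniform null
power = 1.0 - c ** 3            # P1(X > c) = integral of 3x^2 over (c, 1)
assert abs(size - alpha) < 1e-12
assert power > size             # the MP test beats its own size
```

Letting c sweep (0, 1) and plotting the (size, power) pairs traces out the upper boundary of the set V asked about in part b).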

62. Consider a composite versus composite testing problem in a family of distributions P = {P_θ : θ ∈ Θ} dominated by the σ-finite measure μ, and specifically a Bayesian decision-theoretic approach to this problem with prior distribution G under 0-1 loss. Write Θ₀ and Θ₁ for the null and alternative parameter sets.
a) Show that if G(Θ₀) = 0 then φ ≡ 1 is Bayes, while if G(Θ₁) = 0 then φ ≡ 0 is Bayes.
b) Show that if G(Θ₀) > 0 and G(Θ₁) > 0 then the Bayes test against G has Neyman-Pearson form for densities g₀ and g₁ on 𝒳 defined by
g₀(x) = ∫_Θ₀ f(x|θ)dG(θ) / G(Θ₀) and g₁(x) = ∫_Θ₁ f(x|θ)dG(θ) / G(Θ₁).
c) Suppose that X is Normal(θ, 1). Find Bayes tests of H₀: θ = 0 vs H₁: θ ≠ 0
i) supposing that G is the Normal(0, σ²) distribution, and
ii) supposing that G is ½(N + Δ), for N the Normal(0, σ²) distribution and Δ a point mass distribution at 0.

63. Suppose that X is Poisson with mean θ. Find the UMP test of size α = .05 of H₀: θ ≤ 4 or θ ≥ 10 vs H₁: 4 < θ < 10. The following table of Poisson probabilities P[X = x|θ] will be useful in this enterprise.

x:       0      1      2      3      4      5      6      7      8      9      10     11     12
θ = 4:   .0183  .0733  .1465  .1954  .1954  .1563  .1042  .0595  .0298  .0132  .0053  .0019  .0006
θ = 10:  .0000  .0005  .0023  .0076  .0189  .0378  .0631  .0901  .1126  .1251  .1251  .1137  .0948

x:       13     14     15     16     17     18     19     20     21     22     23     24     25
θ = 4:   .0002  .0001  .0000  .0000  .0000  .0000  .0000  .0000  .0000  .0000  .0000  .0000  .0000
θ = 10:  .0729  .0521  .0347  .0217  .0128  .0071  .0037  .0019  .0009  .0004  .0002  .0001  .0000

64. Suppose that X is exponential with mean λ. Set up the two equations that will have to be solved simultaneously in order to find the UMP size α test of H₀: λ ≤ .5 or λ ≥ 2 vs H₁: λ ∈ (.5, 2).

65. Suppose that for i = 1, 2, …, n the random vectors Xᵢ = (Yᵢ, Zᵢ) are iid with distribution P_p described as follows. Y and Z are independent, Y ~ binomial(1, p) and Z ~ geometric(p) (i.e. with pmf g(z|p) = p(1 − p)ᶻ for z = 0, 1, 2, …).
a) Give the C-R lower bound on the variance of an unbiased estimator of p based on X₁, X₂, …, Xₙ.
b) Find the maximum likelihood estimator of p based on X₁, X₂, …, Xₙ and discuss its asymptotic properties.
c) Give both the expected information and observed information large sample confidence intervals for p based on your estimator from b).

66. Consider again the scenario of problem 20. There you sketched the risk set S. Here sketch the corresponding set V.
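Part c) of problem 62 is the classic point-null situation: only a prior with an atom at θ = 0 gives a nontrivial Bayes test. A sketch of the posterior probability of H₀ under an atom-plus-normal prior (the mixing weight ½ and the variance τ² = 1 below are illustrative choices, not values from the problem):

```python
import math

def phi(x, mean=0.0, var=1.0):
    """Normal density, used both for the H0 likelihood and the H1 marginal."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def post_H0(x, pi0=0.5, tau2=1.0):
    # X ~ N(theta, 1); H0: theta = 0 gets prior mass pi0, and under H1
    # theta ~ N(0, tau2), so the marginal of X under H1 is N(0, 1 + tau2).
    m0 = phi(x, 0.0, 1.0)
    m1 = phi(x, 0.0, 1.0 + tau2)
    return pi0 * m0 / (pi0 * m0 + (1.0 - pi0) * m1)

assert post_H0(0.0) > 0.5   # near 0 the point null is favored
assert post_H0(4.0) < 0.5   # far from 0 the alternative dominates
```

Under the purely continuous prior of part i), by contrast, G(Θ₀) = 0 and part a) already forces the trivial Bayes test.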

67. Consider the estimation problem with iid P_θ observations, where P_θ is the exponential distribution with mean θ. Let G be the group of scale transformations on 𝒳 = (0, ∞), G = {g_c : c > 0} where g_c(x) = cx.
a) Show that the estimation problem with loss L(θ, a) = (log(a/θ))² is invariant under G and say what relationship any equivariant nonrandomized decision rule must satisfy.
b) Show that the estimation problem with loss L(θ, a) = (θ − a)²/θ² is invariant under G, and say what relationship any equivariant nonrandomized estimator must satisfy.
c) Find the generalized Bayes estimator of θ in situation b) if the prior has density wrt Lebesgue measure g(θ) = θ⁻¹I[θ > 0]. Argue that this estimator is the best equivariant estimator in the situation of b).

68. (Problem 37, page 122 TSH.) Let X₁, …, X_m and Y₁, …, Yₙ be independent samples from N(ξ, 1) and N(η, 1), and consider the hypotheses H₀: η ≤ ξ and H₁: η > ξ. Show that there exists a UMP test of size α, and that it rejects H₀ when Ȳ − X̄ is too large. (If (ξ₁, η₁) is a particular alternative, the distribution assigning probability 1 to the point with η = ξ = (mξ₁ + nη₁)/(m + n) is least favorable at size α.)

69. Problem 57 Schervish, page ___.

70. Problem 12, parts a)-c) Schervish, page ___.

71. Fix α ∈ (0, .5) and c ∈ (α, 1). Let Θ = {0} ∪ [1, 2] and consider the discrete distributions with probability mass functions as below.

x:       1      2      3      4
θ = 0:   ___    ___    ___    ___
θ ≠ 0:   ___    ___    ___    ___

Find the size α likelihood ratio test of H₀: θ = 0 vs H₁: θ ≠ 0. Show that the test φ(x) = I[x = 1] is of size α and is strictly more powerful than the LRT whatever be θ ≠ 0. (This simple example shows that LRTs need not necessarily be in any sense optimal.)

72. Consider the situation of problem 58.
a) Find the form of UMP tests of H₀: θ ≤ θ₀ vs H₁: θ > θ₀. Discuss how you would go about choosing an appropriate constant to produce a test of approximately size α. Discuss how you would go about finding an optimal (approximately) 90% confidence set for θ.
b) Beginning with a third order Taylor expansion of the loglikelihood at θ₀ about the MLE, prove directly that the asymptotic χ² distribution holds under H₀ for the likelihood ratio statistic for testing H₀: θ = θ₀ vs H₁: θ ≠ θ₀.
c) Suppose that n = 20 and n₀ = 7, n₁ = 6 and n₂ = 7 (the counts of Xᵢ equal to 0, 1 and 2). Plot the corresponding loglikelihood and give the value of the MLE. Based on the MLE and its normal limiting distribution, then give an approximate 90% two-sided confidence interval for θ. Finally, based on the asymptotic χ² distribution of the likelihood ratio statistic, give a different approximate 90% confidence interval for θ.

73. Consider the situation of Problems 9 and 60. Below are some data artificially generated from an exponential distribution.
.24, 3.20, .14, 1.6, .5, 1.15, .32, .66, 1.60, .34, .61, .09, 1.1, 1.29, .23, .5, .11, 3.2, 2.53, ___
a) Plot the loglikelihood function for the uncensored data (the x′ values given above). Give approximate 90% two-sided confidence intervals for λ based on the asymptotic χ² distribution for the LRT statistic for testing H₀: λ = λ₀ vs H₁: λ ≠ λ₀ and based on the asymptotic normal distribution of the MLE.
Now consider the censored data problem where any value less than 1 is reported as 0. Modify the above data accordingly and do the following.
b) Plot the loglikelihood for the censored data (the derived x values). How does this function of λ compare to the one from part a)? It might be informative to plot these on the same set of axes.
c) It turns out (you might derive this fact) that
I(λ) = exp(−λ)(1/λ² + 1/(1 − exp(−λ))).
Give two different approximate 90% confidence intervals for λ based on the asymptotic distribution of the MLE here. Then give an approximate 90% interval based on inverting the LRTs of H₀: λ = λ₀ vs H₁: λ ≠ λ₀.

74. Suppose that X₁, X₂, X₃ and X₄ are independent binomial random variables, Xᵢ ~ binomial(nᵢ, pᵢ). Consider the problem of testing H₀: p₁ = p₂ and p₃ = p₄ against the alternative that H₀ does not hold.
a) Find the form of the LRT of these hypotheses and show that the log of the LRT statistic is the sum of the logs of independent LRT statistics for H₀′: p₁ = p₂ and H₀″: p₃ = p₄ (a fact that might be useful in directly showing the χ² limit of the LRT statistic under the null hypothesis).
b) Find the form of the Wald tests and show directly that the test statistic is asymptotically χ² under the null hypothesis.
c) Find the form of the χ² tests of these hypotheses and again show directly that the test statistic is asymptotically χ² under the null hypothesis.

75. If X₁, X₂, …, Xₙ are iid normal(μ, σ²), find the form of the LRT of H₀: μ = μ₀ vs H₁: μ ≠ μ₀. Show directly that the LRT statistic is asymptotically χ² with 1 degree of freedom. Invert the LRTs to produce a large sample 90% confidence procedure for μ.
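For problem 75, the LRT statistic has the closed form 2 log Λ = n log(σ̂₀²/σ̂²) = n log(1 + t²/(n − 1)), a monotone function of the squared t statistic. A quick numerical verification of that identity on simulated data:

```python
import math
import random

# Problem 75 sketch: for X_1, ..., X_n iid N(mu, sigma^2), the LRT of
# H0: mu = mu0 satisfies
#   2 log Lambda = n log(sigmahat0^2 / sigmahat^2) = n log(1 + t^2 / (n - 1)).
random.seed(1)
n, mu0 = 30, 0.0
xs = [random.gauss(0.5, 2.0) for _ in range(n)]
xbar = sum(xs) / n
sig2_hat = sum((x - xbar) ** 2 for x in xs) / n    # unrestricted MLE of sigma^2
sig2_null = sum((x - mu0) ** 2 for x in xs) / n    # MLE of sigma^2 under H0
two_log_lambda = n * math.log(sig2_null / sig2_hat)

s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)    # usual sample variance
t2 = n * (xbar - mu0) ** 2 / s2                    # squared t statistic
assert abs(two_log_lambda - n * math.log(1.0 + t2 / (n - 1))) < 1e-9
```

Since t² converges to χ²₁ under H₀ and n log(1 + u/(n − 1)) ≈ u for large n, this identity is one route to the asymptotic χ² distribution the problem asks for.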


(a) (3 points) Construct a 95% confidence interval for β 2 in Equation 1. Problem 1 (21 points) An economist runs the regression y i = β 0 + x 1i β 1 + x 2i β 2 + x 3i β 3 + ε i (1) The results are summarized in the following table: Equation 1. Variable Coefficient Std. Error

More information

Notes on Decision Theory and Prediction

Notes on Decision Theory and Prediction Notes on Decision Theory and Prediction Ronald Christensen Professor of Statistics Department of Mathematics and Statistics University of New Mexico October 7, 2014 1. Decision Theory Decision theory is

More information

HYPOTHESIS TESTING: FREQUENTIST APPROACH.

HYPOTHESIS TESTING: FREQUENTIST APPROACH. HYPOTHESIS TESTING: FREQUENTIST APPROACH. These notes summarize the lectures on (the frequentist approach to) hypothesis testing. You should be familiar with the standard hypothesis testing from previous

More information

STATISTICS SYLLABUS UNIT I

STATISTICS SYLLABUS UNIT I STATISTICS SYLLABUS UNIT I (Probability Theory) Definition Classical and axiomatic approaches.laws of total and compound probability, conditional probability, Bayes Theorem. Random variable and its distribution

More information

SIMULATION - PROBLEM SET 1

SIMULATION - PROBLEM SET 1 SIMULATION - PROBLEM SET 1 " if! Ÿ B Ÿ 1. The random variable X has probability density function 0ÐBÑ œ " $ if Ÿ B Ÿ.! otherwise Using the inverse transform method of simulation, find the random observation

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2008 Prof. Gesine Reinert 1 Data x = x 1, x 2,..., x n, realisations of random variables X 1, X 2,..., X n with distribution (model)

More information

Stat 5102 Final Exam May 14, 2015

Stat 5102 Final Exam May 14, 2015 Stat 5102 Final Exam May 14, 2015 Name Student ID The exam is closed book and closed notes. You may use three 8 1 11 2 sheets of paper with formulas, etc. You may also use the handouts on brand name distributions

More information

Statistical machine learning and kernel methods. Primary references: John Shawe-Taylor and Nello Cristianini, Kernel Methods for Pattern Analysis

Statistical machine learning and kernel methods. Primary references: John Shawe-Taylor and Nello Cristianini, Kernel Methods for Pattern Analysis Part 5 (MA 751) Statistical machine learning and kernel methods Primary references: John Shawe-Taylor and Nello Cristianini, Kernel Methods for Pattern Analysis Christopher Burges, A tutorial on support

More information

8œ! This theorem is justified by repeating the process developed for a Taylor polynomial an infinite number of times.

8œ! This theorem is justified by repeating the process developed for a Taylor polynomial an infinite number of times. Taylor and Maclaurin Series We can use the same process we used to find a Taylor or Maclaurin polynomial to find a power series for a particular function as long as the function has infinitely many derivatives.

More information

Review Quiz. 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the

Review Quiz. 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the Review Quiz 1. Prove that in a one-dimensional canonical exponential family, the complete and sufficient statistic achieves the Cramér Rao lower bound (CRLB). That is, if where { } and are scalars, then

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2009 Prof. Gesine Reinert Our standard situation is that we have data x = x 1, x 2,..., x n, which we view as realisations of random

More information

The Bayesian Choice. Christian P. Robert. From Decision-Theoretic Foundations to Computational Implementation. Second Edition.

The Bayesian Choice. Christian P. Robert. From Decision-Theoretic Foundations to Computational Implementation. Second Edition. Christian P. Robert The Bayesian Choice From Decision-Theoretic Foundations to Computational Implementation Second Edition With 23 Illustrations ^Springer" Contents Preface to the Second Edition Preface

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang

More information

Exam C Solutions Spring 2005

Exam C Solutions Spring 2005 Exam C Solutions Spring 005 Question # The CDF is F( x) = 4 ( + x) Observation (x) F(x) compare to: Maximum difference 0. 0.58 0, 0. 0.58 0.7 0.880 0., 0.4 0.680 0.9 0.93 0.4, 0.6 0.53. 0.949 0.6, 0.8

More information

Lecture 4. f X T, (x t, ) = f X,T (x, t ) f T (t )

Lecture 4. f X T, (x t, ) = f X,T (x, t ) f T (t ) LECURE NOES 21 Lecture 4 7. Sufficient statistics Consider the usual statistical setup: the data is X and the paramter is. o gain information about the parameter we study various functions of the data

More information

Some General Types of Tests

Some General Types of Tests Some General Types of Tests We may not be able to find a UMP or UMPU test in a given situation. In that case, we may use test of some general class of tests that often have good asymptotic properties.

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

1. (Regular) Exponential Family

1. (Regular) Exponential Family 1. (Regular) Exponential Family The density function of a regular exponential family is: [ ] Example. Poisson(θ) [ ] Example. Normal. (both unknown). ) [ ] [ ] [ ] [ ] 2. Theorem (Exponential family &

More information

Hypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes

Hypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes Neyman-Pearson paradigm. Suppose that a researcher is interested in whether the new drug works. The process of determining whether the outcome of the experiment points to yes or no is called hypothesis

More information

Mathematical Statistics

Mathematical Statistics Mathematical Statistics MAS 713 Chapter 8 Previous lecture: 1 Bayesian Inference 2 Decision theory 3 Bayesian Vs. Frequentist 4 Loss functions 5 Conjugate priors Any questions? Mathematical Statistics

More information

It's Only Fitting. Fitting model to data parameterizing model estimating unknown parameters in the model

It's Only Fitting. Fitting model to data parameterizing model estimating unknown parameters in the model It's Only Fitting Fitting model to data parameterizing model estimating unknown parameters in the model Likelihood: an example Cohort of 8! individuals observe survivors at times >œ 1, 2, 3,..., : 8",

More information

STAT 135 Lab 5 Bootstrapping and Hypothesis Testing

STAT 135 Lab 5 Bootstrapping and Hypothesis Testing STAT 135 Lab 5 Bootstrapping and Hypothesis Testing Rebecca Barter March 2, 2015 The Bootstrap Bootstrap Suppose that we are interested in estimating a parameter θ from some population with members x 1,...,

More information

ST5215: Advanced Statistical Theory

ST5215: Advanced Statistical Theory Department of Statistics & Applied Probability Wednesday, October 19, 2011 Lecture 17: UMVUE and the first method of derivation Estimable parameters Let ϑ be a parameter in the family P. If there exists

More information

Lecture notes on statistical decision theory Econ 2110, fall 2013

Lecture notes on statistical decision theory Econ 2110, fall 2013 Lecture notes on statistical decision theory Econ 2110, fall 2013 Maximilian Kasy March 10, 2014 These lecture notes are roughly based on Robert, C. (2007). The Bayesian choice: from decision-theoretic

More information

STAT 830 Hypothesis Testing

STAT 830 Hypothesis Testing STAT 830 Hypothesis Testing Richard Lockhart Simon Fraser University STAT 830 Fall 2018 Richard Lockhart (Simon Fraser University) STAT 830 Hypothesis Testing STAT 830 Fall 2018 1 / 30 Purposes of These

More information

Statistics 3858 : Maximum Likelihood Estimators

Statistics 3858 : Maximum Likelihood Estimators Statistics 3858 : Maximum Likelihood Estimators 1 Method of Maximum Likelihood In this method we construct the so called likelihood function, that is L(θ) = L(θ; X 1, X 2,..., X n ) = f n (X 1, X 2,...,

More information

CHAPTER I INTRODUCTION

CHAPTER I INTRODUCTION CHAPTER I INTRODUCTION Preliminaries. Perturbation theory is the study of the effects of small disturbances in the mathematical model of a physical system. The model could be expressed as an algebraic

More information

Ch. 5 Hypothesis Testing

Ch. 5 Hypothesis Testing Ch. 5 Hypothesis Testing The current framework of hypothesis testing is largely due to the work of Neyman and Pearson in the late 1920s, early 30s, complementing Fisher s work on estimation. As in estimation,

More information

MAP METHODS FOR MACHINE LEARNING. Mark A. Kon, Boston University Leszek Plaskota, Warsaw University, Warsaw Andrzej Przybyszewski, McGill University

MAP METHODS FOR MACHINE LEARNING. Mark A. Kon, Boston University Leszek Plaskota, Warsaw University, Warsaw Andrzej Przybyszewski, McGill University MAP METHODS FOR MACHINE LEARNING Mark A. Kon, Boston University Leszek Plaskota, Warsaw University, Warsaw Andrzej Przybyszewski, McGill University Example: Control System 1. Paradigm: Machine learning

More information

Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory

Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory Statistical Inference Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory IP, José Bioucas Dias, IST, 2007

More information

Review and continuation from last week Properties of MLEs

Review and continuation from last week Properties of MLEs Review and continuation from last week Properties of MLEs As we have mentioned, MLEs have a nice intuitive property, and as we have seen, they have a certain equivariance property. We will see later that

More information

Here are proofs for some of the results about diagonalization that were presented without proof in class.

Here are proofs for some of the results about diagonalization that were presented without proof in class. Suppose E is an 8 8 matrix. In what follows, terms like eigenvectors, eigenvalues, and eigenspaces all refer to the matrix E. Here are proofs for some of the results about diagonalization that were presented

More information

simple if it completely specifies the density of x

simple if it completely specifies the density of x 3. Hypothesis Testing Pure significance tests Data x = (x 1,..., x n ) from f(x, θ) Hypothesis H 0 : restricts f(x, θ) Are the data consistent with H 0? H 0 is called the null hypothesis simple if it completely

More information

Unobservable Parameter. Observed Random Sample. Calculate Posterior. Choosing Prior. Conjugate prior. population proportion, p prior:

Unobservable Parameter. Observed Random Sample. Calculate Posterior. Choosing Prior. Conjugate prior. population proportion, p prior: Pi Priors Unobservable Parameter population proportion, p prior: π ( p) Conjugate prior π ( p) ~ Beta( a, b) same PDF family exponential family only Posterior π ( p y) ~ Beta( a + y, b + n y) Observed

More information

Recall that in order to prove Theorem 8.8, we argued that under certain regularity conditions, the following facts are true under H 0 : 1 n

Recall that in order to prove Theorem 8.8, we argued that under certain regularity conditions, the following facts are true under H 0 : 1 n Chapter 9 Hypothesis Testing 9.1 Wald, Rao, and Likelihood Ratio Tests Suppose we wish to test H 0 : θ = θ 0 against H 1 : θ θ 0. The likelihood-based results of Chapter 8 give rise to several possible

More information

ECE531 Lecture 4b: Composite Hypothesis Testing

ECE531 Lecture 4b: Composite Hypothesis Testing ECE531 Lecture 4b: Composite Hypothesis Testing D. Richard Brown III Worcester Polytechnic Institute 16-February-2011 Worcester Polytechnic Institute D. Richard Brown III 16-February-2011 1 / 44 Introduction

More information

Section 1.3 Functions and Their Graphs 19

Section 1.3 Functions and Their Graphs 19 23. 0 1 2 24. 0 1 2 y 0 1 0 y 1 0 0 Section 1.3 Functions and Their Graphs 19 3, Ÿ 1, 0 25. y œ 26. y œ œ 2, 1 œ, 0 Ÿ " 27. (a) Line through a!ß! band a"ß " b: y œ Line through a"ß " band aß! b: y œ 2,

More information

Statistics Ph.D. Qualifying Exam

Statistics Ph.D. Qualifying Exam Department of Statistics Carnegie Mellon University May 7 2008 Statistics Ph.D. Qualifying Exam You are not expected to solve all five problems. Complete solutions to few problems will be preferred to

More information

Nuisance parameters and their treatment

Nuisance parameters and their treatment BS2 Statistical Inference, Lecture 2, Hilary Term 2008 April 2, 2008 Ancillarity Inference principles Completeness A statistic A = a(x ) is said to be ancillary if (i) The distribution of A does not depend

More information

Notes on the Unique Extension Theorem. Recall that we were interested in defining a general measure of a size of a set on Ò!ß "Ó.

Notes on the Unique Extension Theorem. Recall that we were interested in defining a general measure of a size of a set on Ò!ß Ó. 1. More on measures: Notes on the Unique Extension Theorem Recall that we were interested in defining a general measure of a size of a set on Ò!ß "Ó. Defined this measure T. Defined TÐ ( +ß, ) Ñ œ, +Þ

More information

TUTORIAL 8 SOLUTIONS #

TUTORIAL 8 SOLUTIONS # TUTORIAL 8 SOLUTIONS #9.11.21 Suppose that a single observation X is taken from a uniform density on [0,θ], and consider testing H 0 : θ = 1 versus H 1 : θ =2. (a) Find a test that has significance level

More information

A Course in Statistical Theory

A Course in Statistical Theory A Course in Statistical Theory David J. Olive Southern Illinois University Department of Mathematics Mailcode 4408 Carbondale, IL 62901-4408 dolive@.siu.edu January 2008, notes September 29, 2013 Contents

More information

Statistics Ph.D. Qualifying Exam: Part I October 18, 2003

Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Statistics Ph.D. Qualifying Exam: Part I October 18, 2003 Student Name: 1. Answer 8 out of 12 problems. Mark the problems you selected in the following table. 1 2 3 4 5 6 7 8 9 10 11 12 2. Write your answer

More information

STAT 830 Hypothesis Testing

STAT 830 Hypothesis Testing STAT 830 Hypothesis Testing Hypothesis testing is a statistical problem where you must choose, on the basis of data X, between two alternatives. We formalize this as the problem of choosing between two

More information

Chapter 7. Hypothesis Testing

Chapter 7. Hypothesis Testing Chapter 7. Hypothesis Testing Joonpyo Kim June 24, 2017 Joonpyo Kim Ch7 June 24, 2017 1 / 63 Basic Concepts of Testing Suppose that our interest centers on a random variable X which has density function

More information

EECS564 Estimation, Filtering, and Detection Exam 2 Week of April 20, 2015

EECS564 Estimation, Filtering, and Detection Exam 2 Week of April 20, 2015 EECS564 Estimation, Filtering, and Detection Exam Week of April 0, 015 This is an open book takehome exam. You have 48 hours to complete the exam. All work on the exam should be your own. problems have

More information

Theory of Probability Sir Harold Jeffreys Table of Contents

Theory of Probability Sir Harold Jeffreys Table of Contents Theory of Probability Sir Harold Jeffreys Table of Contents I. Fundamental Notions 1.0 [Induction and its relation to deduction] 1 1.1 [Principles of inductive reasoning] 8 1.2 [Axioms for conditional

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

Practice Problems Section Problems

Practice Problems Section Problems Practice Problems Section 4-4-3 4-4 4-5 4-6 4-7 4-8 4-10 Supplemental Problems 4-1 to 4-9 4-13, 14, 15, 17, 19, 0 4-3, 34, 36, 38 4-47, 49, 5, 54, 55 4-59, 60, 63 4-66, 68, 69, 70, 74 4-79, 81, 84 4-85,

More information

exclusive prepublication prepublication discount 25 FREE reprints Order now!

exclusive prepublication prepublication discount 25 FREE reprints Order now! Dear Contributor Please take advantage of the exclusive prepublication offer to all Dekker authors: Order your article reprints now to receive a special prepublication discount and FREE reprints when you

More information

Parameter estimation and forecasting. Cristiano Porciani AIfA, Uni-Bonn

Parameter estimation and forecasting. Cristiano Porciani AIfA, Uni-Bonn Parameter estimation and forecasting Cristiano Porciani AIfA, Uni-Bonn Questions? C. Porciani Estimation & forecasting 2 Temperature fluctuations Variance at multipole l (angle ~180o/l) C. Porciani Estimation

More information

Eco517 Fall 2004 C. Sims MIDTERM EXAM

Eco517 Fall 2004 C. Sims MIDTERM EXAM Eco517 Fall 2004 C. Sims MIDTERM EXAM Answer all four questions. Each is worth 23 points. Do not devote disproportionate time to any one question unless you have answered all the others. (1) We are considering

More information

Review. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda

Review. DS GA 1002 Statistical and Mathematical Models.   Carlos Fernandez-Granda Review DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Probability and statistics Probability: Framework for dealing with

More information

1. Classify each number. Choose all correct answers. b. È # : (i) natural number (ii) integer (iii) rational number (iv) real number

1. Classify each number. Choose all correct answers. b. È # : (i) natural number (ii) integer (iii) rational number (iv) real number Review for Placement Test To ypass Math 1301 College Algebra Department of Computer and Mathematical Sciences University of Houston-Downtown Revised: Fall 2009 PLEASE READ THE FOLLOWING CAREFULLY: 1. The

More information

NOVEMBER 2005 EXAM NOVEMBER 2005 CAS COURSE 3 EXAM SOLUTIONS 1. Maximum likelihood estimation: The log of the density is 68 0ÐBÑ œ 68 ) Ð) "Ñ 68 B. The loglikelihood function is & j œ 68 0ÐB Ñ œ & 68 )

More information

Part 1.) We know that the probability of any specific x only given p ij = p i p j is just multinomial(n, p) where p k1 k 2

Part 1.) We know that the probability of any specific x only given p ij = p i p j is just multinomial(n, p) where p k1 k 2 Problem.) I will break this into two parts: () Proving w (m) = p( x (m) X i = x i, X j = x j, p ij = p i p j ). In other words, the probability of a specific table in T x given the row and column counts

More information

Statistical Methods in HYDROLOGY CHARLES T. HAAN. The Iowa State University Press / Ames

Statistical Methods in HYDROLOGY CHARLES T. HAAN. The Iowa State University Press / Ames Statistical Methods in HYDROLOGY CHARLES T. HAAN The Iowa State University Press / Ames Univariate BASIC Table of Contents PREFACE xiii ACKNOWLEDGEMENTS xv 1 INTRODUCTION 1 2 PROBABILITY AND PROBABILITY

More information

Primer on statistics:

Primer on statistics: Primer on statistics: MLE, Confidence Intervals, and Hypothesis Testing ryan.reece@gmail.com http://rreece.github.io/ Insight Data Science - AI Fellows Workshop Feb 16, 018 Outline 1. Maximum likelihood

More information

Uniformly Most Powerful Bayesian Tests and Standards for Statistical Evidence

Uniformly Most Powerful Bayesian Tests and Standards for Statistical Evidence Uniformly Most Powerful Bayesian Tests and Standards for Statistical Evidence Valen E. Johnson Texas A&M University February 27, 2014 Valen E. Johnson Texas A&M University Uniformly most powerful Bayes

More information

Chapter 8 Hypothesis Testing

Chapter 8 Hypothesis Testing Leture 5 for BST 63: Statistial Theory II Kui Zhang, Spring Chapter 8 Hypothesis Testing Setion 8 Introdution Definition 8 A hypothesis is a statement about a population parameter Definition 8 The two

More information

Chapter 4. Theory of Tests. 4.1 Introduction

Chapter 4. Theory of Tests. 4.1 Introduction Chapter 4 Theory of Tests 4.1 Introduction Parametric model: (X, B X, P θ ), P θ P = {P θ θ Θ} where Θ = H 0 +H 1 X = K +A : K: critical region = rejection region / A: acceptance region A decision rule

More information

Probability and Statistics qualifying exam, May 2015

Probability and Statistics qualifying exam, May 2015 Probability and Statistics qualifying exam, May 2015 Name: Instructions: 1. The exam is divided into 3 sections: Linear Models, Mathematical Statistics and Probability. You must pass each section to pass

More information

(DMSTT 01) M.Sc. DEGREE EXAMINATION, DECEMBER First Year Statistics Paper I PROBABILITY AND DISTRIBUTION THEORY. Answer any FIVE questions.

(DMSTT 01) M.Sc. DEGREE EXAMINATION, DECEMBER First Year Statistics Paper I PROBABILITY AND DISTRIBUTION THEORY. Answer any FIVE questions. (DMSTT 01) M.Sc. DEGREE EXAMINATION, DECEMBER 2011. First Year Statistics Paper I PROBABILITY AND DISTRIBUTION THEORY Time : Three hours Maximum : 100 marks Answer any FIVE questions. All questions carry

More information

Lecture 7 October 13

Lecture 7 October 13 STATS 300A: Theory of Statistics Fall 2015 Lecture 7 October 13 Lecturer: Lester Mackey Scribe: Jing Miao and Xiuyuan Lu 7.1 Recap So far, we have investigated various criteria for optimal inference. We

More information

LECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b)

LECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b) LECTURE 5 NOTES 1. Bayesian point estimators. In the conventional (frequentist) approach to statistical inference, the parameter θ Θ is considered a fixed quantity. In the Bayesian approach, it is considered

More information

Chapter 3 : Likelihood function and inference

Chapter 3 : Likelihood function and inference Chapter 3 : Likelihood function and inference 4 Likelihood function and inference The likelihood Information and curvature Sufficiency and ancilarity Maximum likelihood estimation Non-regular models EM

More information

557: MATHEMATICAL STATISTICS II HYPOTHESIS TESTING: EXAMPLES

557: MATHEMATICAL STATISTICS II HYPOTHESIS TESTING: EXAMPLES 557: MATHEMATICAL STATISTICS II HYPOTHESIS TESTING: EXAMPLES Example Suppose that X,..., X n N, ). To test H 0 : 0 H : the most powerful test at level α is based on the statistic λx) f π) X x ) n/ exp

More information

PART I INTRODUCTION The meaning of probability Basic definitions for frequentist statistics and Bayesian inference Bayesian inference Combinatorics

PART I INTRODUCTION The meaning of probability Basic definitions for frequentist statistics and Bayesian inference Bayesian inference Combinatorics Table of Preface page xi PART I INTRODUCTION 1 1 The meaning of probability 3 1.1 Classical definition of probability 3 1.2 Statistical definition of probability 9 1.3 Bayesian understanding of probability

More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

8 Testing of Hypotheses and Confidence Regions

8 Testing of Hypotheses and Confidence Regions 8 Testing of Hypotheses and Confidence Regions There are some problems we meet in statistical practice in which estimation of a parameter is not the primary goal; rather, we wish to use our data to decide

More information

Example Let VœÖÐBßCÑ À b-, CœB - ( see the example above). Explain why

Example Let VœÖÐBßCÑ À b-, CœB - ( see the example above). Explain why Definition If V is a relation from E to F, then a) the domain of V œ dom ÐVÑ œ Ö+ E À b, F such that Ð+ß,Ñ V b) the range of V œ ran( VÑ œ Ö, F À b+ E such that Ð+ß,Ñ V " c) the inverse relation of V œ

More information

Lecture 16: Modern Classification (I) - Separating Hyperplanes

Lecture 16: Modern Classification (I) - Separating Hyperplanes Lecture 16: Modern Classification (I) - Separating Hyperplanes Outline 1 2 Separating Hyperplane Binary SVM for Separable Case Bayes Rule for Binary Problems Consider the simplest case: two classes are

More information

MAY 2005 SOA EXAM C/CAS 4 SOLUTIONS

MAY 2005 SOA EXAM C/CAS 4 SOLUTIONS MAY 2005 SOA EXAM C/CAS 4 SOLUTIONS Prepared by Sam Broverman, to appear in ACTEX Exam C/4 Study Guide http://wwwsambrovermancom 2brove@rogerscom sam@utstattorontoedu 1 The distribution function is JÐBÑ

More information

Statistics. Lent Term 2015 Prof. Mark Thomson. 2: The Gaussian Limit

Statistics. Lent Term 2015 Prof. Mark Thomson. 2: The Gaussian Limit Statistics Lent Term 2015 Prof. Mark Thomson Lecture 2 : The Gaussian Limit Prof. M.A. Thomson Lent Term 2015 29 Lecture Lecture Lecture Lecture 1: Back to basics Introduction, Probability distribution

More information

BTRY 4090: Spring 2009 Theory of Statistics

BTRY 4090: Spring 2009 Theory of Statistics BTRY 4090: Spring 2009 Theory of Statistics Guozhang Wang September 25, 2010 1 Review of Probability We begin with a real example of using probability to solve computationally intensive (or infeasible)

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth

More information