Stat 643 Problems, Fall 2000


Assignment 1

1. Let (Ω, F) be a measurable space. Suppose that μ, P and Q are σ-finite positive measures on this space with P ≪ Q and Q ≪ μ.
a) Show that P ≪ μ and dP/dμ = (dP/dQ)(dQ/dμ) a.s. μ.
b) Show that if also μ ≪ Q, then dQ/dμ = (dμ/dQ)⁻¹ a.s. μ.

2. (Cressie) Let Ω = [-c, c] for some c > 0, F be the set of Borel sets and P be Lebesgue measure divided by 2c. For a subset A of Ω, define the symmetric set for A as A* = {ω : -ω ∈ A}, and let C = {A ∈ F : A = A*}.
a) Show that C is a sub-σ-algebra of F.
b) Let X be an integrable random variable. Find E(X | C).

3. Let Ω = [0,1]², F be the set of Borel sets and P = ½Δ + ½μ, for Δ a distribution placing a unit point mass at the point (1,1) and μ 2-dimensional Lebesgue measure on Ω. Consider the variable X(ω) = ω₁ and the sub-σ-algebra of F generated by X, C = σ(X).
a) For A ∈ F, find E(I_A | C) = P(A | C).
b) For Y(ω) = ω₂, find E(Y | C).

4. Suppose that X = (X₁, X₂, ..., Xₙ) has independent components, where each Xᵢ is generated as follows. For independent random variables Wᵢ ~ Normal(μ, 1) and Zᵢ ~ Poisson(μ), Xᵢ = Wᵢ with probability p and Xᵢ = Zᵢ with probability 1 - p. Suppose that μ ∈ [0, ∞). Use the factorization theorem to find low-dimensional sufficient statistics in the cases that:
a) p is known to be ½, and
b) p ∈ [0,1] is unknown.
(In the first case the parameter space is Θ = [0, ∞), while in the second it is [0,1] × [0, ∞).)

5. (Ferguson) Consider the probability distributions on (ℝ, B) defined as follows. For θ = (θ₁, p) ∈ Θ = ℝ × (0,1) and X ~ P_θ, suppose that
P_θ[X = x] = (1 - p)^(x - θ₁) p for x = θ₁, θ₁ + 1, θ₁ + 2, ..., and 0 otherwise.
Let X₁, X₂, ..., Xₙ be iid according to P_θ.
a) Argue that the family {P_θ} is not dominated by a σ-finite measure, so that the factorization theorem cannot be applied to identify a sufficient statistic here.
b) Argue from first principles that the statistic T(X) = (minᵢ Xᵢ, Σᵢ Xᵢ) is sufficient for the parameter θ.

c) Argue that the factorization theorem can be applied if θ₁ is known (the first factor of the parameter space is replaced by a single point), and identify a sufficient statistic for this case.
d) Argue that if p is known, minᵢ Xᵢ is a sufficient statistic.

6. Suppose that X is exponential with mean 1/λ (i.e. X has density f_λ(x) = λ exp(-λx) I[x > 0] with respect to Lebesgue measure on ℝ), but that one only observes Y = X I[X > 1]. (There is interval censoring below x = 1.)
a) Consider the measure ν on Y = {0} ∪ (1, ∞) consisting of a point mass of 1 at 0 plus Lebesgue measure on (1, ∞). Give a formula for the R-N derivative of P_λ wrt ν on Y.
b) Suppose that Y₁, Y₂, ..., Yₙ are iid with the marginal distribution P_λ. Find a 2-dimensional sufficient statistic for this problem and argue that it is indeed sufficient.
c) Argue carefully that your statistic from b) is minimal sufficient.

7. As a complement to Problem 6, consider this random censoring model. Suppose that X is as in Problem 6. Suppose further that, independent of X, Z is exponential with mean 1. We will suppose that with W = min(X, Z) one observes Y = (I[W = X], W) (one sees either X or a random censoring time less than X). The family of distributions of Y is absolutely continuous with respect to the product of counting measure and Lebesgue measure on Y = {0,1} × (0, ∞) (call this dominating measure ν).
a) Find an R-N derivative of P_λ wrt ν on Y.
b) Suppose that Y₁, Y₂, ..., Yₙ are iid with the marginal distribution P_λ. Find a minimal sufficient statistic here. (Actually, carefully proving minimal sufficiency looks like it would be hard. Just make a good guess and say what needs to be done to finish the proof.)

8. Schervish Exercise 2. And show that Z is ancillary.

9. Schervish Exercise (I think you need to assume that {xᵢ} spans ℝ.)

10. Schervish Exercise

11. Schervish Exercise. What is a first order ancillary statistic here?

12. Schervish Exercise
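One way to check an answer to Problem 6 a): Y = 0 exactly when X ≤ 1, so the point-mass part of ν carries P_λ[X ≤ 1], while on (1, ∞) the density of Y is just that of X. Writing the exponential density as f_λ(x) = λe^(-λx), this gives

```latex
\frac{dP_\lambda}{d\nu}(y) =
\begin{cases}
1 - e^{-\lambda}, & y = 0,\\[2pt]
\lambda e^{-\lambda y}, & y > 1.
\end{cases}
```

For a sample Y₁, ..., Yₙ the likelihood is then (1 - e^(-λ))^M λ^(n-M) exp(-λ Σ_{Yᵢ>1} Yᵢ), which points at the 2-dimensional statistic asked for in b).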

Assignment 2

13. Schervish Exercise 2.34 (consider iid observations).

14. Schervish Exercise

15. Schervish Exercise

16. Schervish Exercise

17. Schervish Exercise

18. Prove the following version of Theorem 24 (for which Vardeman gave only a discrete case argument): Suppose that X = (T, S) and the family P of distributions of X on T × S is dominated by the measure μ = μ₁ × μ₂ (a product of σ-finite measures on T and S respectively). With f_θ(t, s) = dP_θ/dμ (t, s), let
g_θ(t) = ∫ f_θ(t, s) dμ₂(s) and f_θ(s | t) = f_θ(t, s) / g_θ(t).
Suppose that P is FI regular at θ and that g_θ(t) = ∫ f_θ(t, s) dμ₂(s) can be differentiated under the integral sign for all t. Then I_X(θ) ≥ I_T(θ).

19. Let P₀ and P₁ be two distributions on X and f₀ and f₁ be densities of these with respect to some dominating σ-finite measure μ. Consider the parametric family of distributions with parameter θ ∈ [0,1] and densities wrt μ of the form
f_θ(x) = (1 - θ) f₀(x) + θ f₁(x),
and suppose that X has density f_θ.
a) If G({0}) = G({1}) = ½, find the posterior distribution of θ | X.
b) Suppose now that G is the uniform distribution on [0,1]. Find the posterior distribution of θ | X.
i) What is the mean of this posterior distribution (one Bayesian way of inventing a point estimator for θ)?
ii) Consider the partition Θ₀ = [0, .5], Θ₁ = (.5, 1.0]. One Bayesian way of inventing a test for H₀: θ ∈ Θ₀ is to decide in favor of Θ₀ if the posterior probability assigned to Θ₀ is at least .5. Describe as explicitly as you can the subset of X where this is the case.

20. Consider the situation of Problem 6 and maximum likelihood estimation of λ.
a) Show that with M = #[Yᵢ's equal to 0], in the event that M = n there is no MLE of λ, but that in all other cases there is a maximizer of the likelihood. Then argue that for any λ, with P_λ probability tending to 1, the MLE of λ, say λ̂, exists.
b) Give a simple estimator of λ based on M alone. Prove that this estimator is consistent for λ. Then write down an explicit one-step Newton modification of this estimator.
c) Discuss what numerical methods you could use to find the MLE from a) in the event that it exists.
d) Give two forms of large sample confidence intervals for λ based on the MLE λ̂ and two different approximations to I(λ).

21. Consider the situation of Problems 6 and 20. Below are some data artificially generated from an exponential distribution:
.24, 3.20, .14, 1.6, .5, 1.15, .32, .66, 1.60, .34, .61, .09, 1.1, 1.29, .23, .5, .11, 3.2, 2.53.
a) Plot the loglikelihood function for the uncensored data (the x values given above). Give approximate 90% two-sided confidence intervals for λ based on the asymptotic χ² distribution of the LRT statistic for testing H₀: λ = λ₀ vs H₁: λ ≠ λ₀, and based on the asymptotic normal distribution of the MLE.
Now consider the censored data problem, where any value less than 1 is reported as 0. Modify the above data accordingly and do the following.
b) Plot the loglikelihood for the censored data (the derived y values). How does this function of λ compare to the one from part a)? It might be informative to plot these on the same set of axes.
c) It turns out (you might derive this fact) that
I(λ) = exp(-λ)(1 + 1/λ²) + exp(-2λ)/(1 - exp(-λ)).
Give two different approximate 90% confidence intervals for λ based on the asymptotic distribution of the MLE here. Then give an approximate 90% interval based on inverting the LRTs of H₀: λ = λ₀ vs H₁: λ ≠ λ₀.

22. Suppose that X₁, X₂, X₃ and X₄ are independent binomial random variables, Xᵢ ~ binomial(n, pᵢ).
Consider the problem of testing H₀: p₁ = p₂ and p₃ = p₄ against the alternative that H₀ does not hold.
a) Find the form of the LRT of these hypotheses and show that the log of the LRT statistic is the sum of the logs of independent LRT statistics for H₀′: p₁ = p₂ and H₀″: p₃ = p₄ (a fact that might be useful in directly showing the χ² limit of the LRT statistic under the null hypothesis).
b) Find the form of the Wald tests and show directly that the test statistic is asymptotically χ² under the null hypothesis.
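The additivity claimed in Problem 22 a) is easy to see numerically. A minimal sketch (the function names and the particular counts are illustrative, not part of the problem): the LRT statistic for the joint null p₁ = p₂ and p₃ = p₄, computed from scratch, equals the sum of the two two-sample LRT statistics.

```python
import math

def xlogy(x, y):
    # x * log(y) with the 0 * log(0) = 0 convention, for boundary MLEs
    return 0.0 if x == 0 else x * math.log(y)

def loglik(x, n, p):
    # log of p^x (1-p)^(n-x); binomial coefficients cancel in likelihood ratios
    return xlogy(x, p) + xlogy(n - x, 1.0 - p)

def pair_lrt(x1, x2, n):
    """2 log Lambda for H0: p1 = p2 based on X1, X2 ~ binomial(n, .)."""
    pooled = (x1 + x2) / (2.0 * n)
    return 2.0 * (loglik(x1, n, x1 / n) + loglik(x2, n, x2 / n)
                  - loglik(x1, n, pooled) - loglik(x2, n, pooled))

def full_lrt(x1, x2, x3, x4, n):
    """2 log Lambda for H0: p1 = p2 and p3 = p4, computed directly."""
    p12 = (x1 + x2) / (2.0 * n)
    p34 = (x3 + x4) / (2.0 * n)
    unrestricted = sum(loglik(x, n, x / n) for x in (x1, x2, x3, x4))
    restricted = (loglik(x1, n, p12) + loglik(x2, n, p12)
                  + loglik(x3, n, p34) + loglik(x4, n, p34))
    return 2.0 * (unrestricted - restricted)
```

For example, full_lrt(30, 40, 10, 20, 100) agrees with pair_lrt(30, 40, 100) + pair_lrt(10, 20, 100), each pair statistic being approximately χ²(1) under its own null for large n.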

c) Find the form of the χ² (score) tests of these hypotheses and again show directly that the test statistic is asymptotically χ² under the null hypothesis.

23. Suppose that X₁, X₂, ... are iid, each taking values in X = {-1, 0, 1} with R-N derivative wrt counting measure on X
f_θ(x) = exp(xθ) / (exp(-θ) + 1 + exp(θ)).
a) Find an estimator of θ based on W̄ = (1/n) Σᵢ I[Xᵢ = 1] that is √n-consistent (i.e. for which √n(θ̃ - θ) converges in distribution).
b) Find in more or less explicit form a one-step Newton modification of your estimator from a).
c) Prove directly that your estimator from b) is asymptotically normal with variance 1/I₁(θ). (With θ̃ the estimator from a) and θ̂ the estimator from b), θ̂ = θ̃ - L′(θ̃)/L″(θ̃), and write L′(θ̃) = L′(θ) + (θ̃ - θ)L″(θ) + ½(θ̃ - θ)² L‴(θ*) for some θ* between θ̃ and θ.)
d) Show that provided x̄ ∈ (-1, 1) the loglikelihood has a maximizer
θ̂ = log[ (x̄ + √(4 - 3x̄²)) / (2(1 - x̄)) ].
Prove that an estimator defined to be θ̂ when x̄ ∈ (-1, 1) will be asymptotically normal with variance 1/I₁(θ).
e) Show that the observed information and expected information approximations lead to the same large sample confidence intervals for θ. What do these look like based on, say, θ̂?
By the way, a version of nearly everything in this problem works in any one-parameter exponential family.

24. Prove the discrete version of Theorem
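For Problem 23, the likelihood equation x̄ = E_θX can be checked numerically. The sketch below (the function names are mine) verifies that the closed-form maximizer from part d) solves the likelihood equation and is a fixed point of the Newton step from part b):

```python
import math

def mean_theta(theta):
    """E_theta X for f_theta(x) = exp(x*theta)/(exp(-theta)+1+exp(theta)), x in {-1,0,1}."""
    t = math.exp(theta)
    return (t - 1.0 / t) / (1.0 / t + 1.0 + t)

def var_theta(theta):
    """Var_theta X, which equals the Fisher information I1(theta) in this family."""
    t = math.exp(theta)
    s = 1.0 / t + 1.0 + t
    ex2 = (t + 1.0 / t) / s   # E X^2, since X^2 is in {0, 1}
    return ex2 - mean_theta(theta) ** 2

def theta_hat(xbar):
    """Closed-form root of the likelihood equation xbar = E_theta X, for -1 < xbar < 1."""
    return math.log((xbar + math.sqrt(4.0 - 3.0 * xbar ** 2)) / (2.0 * (1.0 - xbar)))

def newton_step(theta0, xbar):
    """One Newton step on the per-observation score (xbar - E_theta X)."""
    return theta0 + (xbar - mean_theta(theta0)) / var_theta(theta0)
```

Starting newton_step from any √n-consistent θ̃ (such as the estimator from a)) is exactly the construction in part b).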

Assignment 3

25. (Optional) Consider the problem of Bayesian inference for the binomial parameter p. In particular, for sake of convenience, consider the Uniform(0,1) (Beta(α, β) for α = β = 1) prior distribution.
(a) It is possible to argue from reasonably elementary principles that in this binomial context the Beta posteriors have a consistency property. That is, simple arguments can be used to show that for any fixed p and any ε > 0, for X ~ binomial(n, p), the random variable
Yₙ = ∫ from p-ε to p+ε of [ t^X (1 - t)^(n-X) / B(X+1, n-X+1) ] dt
(which is the posterior probability assigned to the interval (p - ε, p + ε)) converges in P_p probability to 1 as n → ∞. This problem is meant to lead you through this argument. Let ε > 0 and δ > 0.
i) Argue that there exists m₁ such that if n ≥ m₁, |(X+1)/(n+2) - X/n| < ε/3 for all X = 0, 1, ..., n.
ii) Note that the posterior variance is (X+1)(n-X+1)/((n+2)²(n+3)). Argue that there is an m₂ such that if n ≥ m₂ the probability that the posterior assigns to ((X+1)/(n+2) - ε/3, (X+1)/(n+2) + ε/3) is at least 1 - δ for all X = 0, 1, ..., n.
iii) Argue that there is an m₃ such that if n ≥ m₃ the P_p probability that |X/n - p| < ε/3 is at least 1 - δ.
Then note that if n ≥ max(m₁, m₂, m₃), i) and ii) together imply that the posterior probability assigned to (X/n - 2ε/3, X/n + 2ε/3) is at least 1 - δ for any realization x. Then provided |X/n - p| < ε/3, the posterior probability assigned to (p - ε, p + ε) is also at least 1 - δ. But iii) says this happens with P_p probability at least 1 - δ. That is, for large n, with P_p probability at least 1 - δ, Yₙ ≥ 1 - δ. Since δ is arbitrary (and Yₙ ≤ 1), we have the convergence of Yₙ to 1 in P_p probability.
(b) Corollary 45 suggests that in an iid model, posterior densities for large n tend to look normal (with means and variances related to the likelihood material). The posteriors in this binomial problem are Beta(X+1, n-X+1) (and we can think of X ~ binomial(n, p) as derived as the sum of n iid Bernoulli(p) variables). So we ought to expect Beta distributions for large parameter values to look roughly normal. To illustrate this (and thus essentially what Corollary 45 says in this case), do the following. For x/n = .3 (for example...
any other value would do as well), consider the Beta(.3n + 1, .7n + 1) (posterior) distributions for n = 10, 20, 40 and 100. For W ~ Beta(.3n + 1, .7n + 1), plot the probability densities for the variables

√[ (n+2)²(n+3) / ((x+1)(n-x+1)) ] (W - (x+1)/(n+2))
on a single set of axes, along with the standard normal density. Note that if W has pdf f(w), then aW + b has pdf g(w) = (1/a) f((w - b)/a). (Your plots are translated and rescaled posterior densities of p based on possible observed values x = .3n.) If this is any help in doing this plotting, I tried to calculate values of the Beta function using MathCad and got the following:
(B(4,8))⁻¹ = 1.32 × 10³, (B(7,15))⁻¹ = 8.14 × 10⁵, (B(13,29))⁻¹ = 2.29 × 10¹¹ and (B(31,71))⁻¹ = 2.967 × 10²⁷.

26. Problem 12, parts a)-c), Schervish.

27. Consider the following model. Given parameters λ₁, λ₂, ..., λ_N, variables X₁, X₂, ..., X_N are independent Poisson variables, Xᵢ ~ Poisson(λᵢ). M is a parameter taking values in {1, 2, ..., N}, and if i ≤ M, λᵢ = μ₁, while if i > M, λᵢ = μ₂. (M is the number of Poisson means that are the same as that of the first observation.) With parameter vector θ = (M, μ₁, μ₂) belonging to Θ = {1, 2, ..., N} × (0, ∞) × (0, ∞), we wish to make inference about M based on X₁, X₂, ..., X_N in a Bayesian framework.
As matters of notational convenience, let S_M = Σ_{i ≤ M} Xᵢ and T_M = Σ_{i > M} Xᵢ.
a) If, for example, a prior distribution G is constructed by taking M uniform on {1, 2, ..., N}, independent of μ₁ exponential with mean 1, independent of μ₂ exponential with mean 1, it is possible to explicitly find the (marginal) posterior of M given that X₁ = x₁, ..., X_N = x_N. Don't actually bother to finish the somewhat messy calculations needed to do this, but show that this is possible (indicate clearly why appropriate integrals can be evaluated explicitly).
b) Suppose now that G is constructed by taking M uniform on {1, 2, ..., N} independent of (μ₁, μ₂) with joint density g(μ₁, μ₂) wrt Lebesgue measure on (0, ∞) × (0, ∞). Describe in as much detail as possible a SSS based method for approximating the posterior of M, given X₁ = x₁, ..., X_N = x_N. (Give the necessary conditionals up to multiplicative constants, say how you're going to use them, and what you'll do with any vectors you produce by simulation.)

28. Consider the simple two-dimensional discrete distribution for θ = (θ₁, θ₂) given in the table below.

(2-way table of joint probabilities for (θ₁, θ₂))

a) This is obviously a very simple setup where anything one might wish to know or say about the distribution is easily derivable from the table. However, for sake of example, suppose that one wished to use SSS to approximate Q = P[θ₁ ≤ 1] = .6. Argue carefully that Gibbs sampling (SSS) will fail here to produce a correct estimate of Q. What is it about the transition matrix that describes SSS here that causes this failure?
b) Consider the very simple version of the Metropolis-Hastings algorithm where the matrix T = (t_{θ,θ′}) is (1/8)J (for J an 8 × 8 matrix of 1's) and θ and θ′ are vectors for which the corresponding probabilities in the table above are positive. Find the transition matrix P and verify that it has the distribution above as its invariant distribution.

29. Suppose for sake of argument that I am interested in the properties of the discrete distribution with probability mass function
f(x) = sin²(x) / (k 2^x) for x = 1, 2, ..., and 0 otherwise,
but, sadly, I don't know k.
a) Describe a MCMC method I could use to approximate the mean of this distribution.
b) One doesn't really have to resort to MCMC here. One could generate iid observations from f(x) (instead of a MC with stationary distribution f(x)) using the rejection method. (And then rely on the LLN to produce an approximate mean for the distribution.) Prove that the following algorithm (based on iid Uniform(0,1) random variables and samples from a geometric distribution) will produce X from this distribution.
Step 1. Generate X from the geometric distribution with probability mass function
h(x) = (1/2)^x for x = 1, 2, ..., and 0 otherwise.
Step 2. Generate U that is Uniform(0,1).
Step 3. Note that k f(x) = sin²(x) h(x) ≤ h(x). If U h(x) ≤ k f(x) (that is, if U ≤ sin²(x)), set X = x. Otherwise return to Step 1.
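The three steps of the algorithm in Problem 29 b) translate directly into code. A minimal sketch (the names and the particular seed are mine), which also carries out the LLN step to approximate the mean without ever computing k:

```python
import math
import random

def draw_from_f(rng):
    """Rejection sampler for f(x) proportional to sin^2(x) * (1/2)^x, x = 1, 2, ..."""
    while True:
        # Step 1: X geometric with pmf h(x) = (1/2)^x
        x = 1
        while rng.random() < 0.5:
            x += 1
        # Step 2: U uniform on (0, 1)
        u = rng.random()
        # Step 3: accept when U * h(x) <= sin^2(x) * h(x), i.e. U <= sin^2(x)
        if u <= math.sin(x) ** 2:
            return x

rng = random.Random(643)
draws = [draw_from_f(rng) for _ in range(20000)]
approx_mean = sum(draws) / len(draws)
```

Since the acceptance probability given X = x is sin²(x), the accepted values have pmf proportional to h(x) sin²(x), i.e. exactly f(x), and approx_mean settles near the true mean (roughly 1.8).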

30. Suppose that S is a convex subset of [0, ∞)^k. Argue that there exists a decision problem that has S as its set of risk vectors. (Consider problems where X is degenerate and carries no information.)

31. Consider the two state decision problem with Θ = {1, 2}, P₁ the Bernoulli(1/4) distribution, P₂ the Bernoulli(1/2) distribution, A = Θ and L(θ, a) = I[θ ≠ a].
a) Find the risk vectors for the four nonrandomized decision rules. Plot these in the plane. Sketch S, the risk set for this problem.
b) For this problem, show explicitly that any randomized decision rule has a corresponding behavioral decision rule with identical risk vector, and vice versa.
c) Identify the set of all admissible risk vectors for this problem. Is there a minimal complete class for this decision problem? If there is one, what is it?
d) For each p ∈ [0, 1], identify those risk vectors that are Bayes versus the prior π = (p, 1 - p). For which priors is there more than one Bayes rule?
e) Verify directly that the prescription "choose an action that minimizes the posterior expected loss" produces a Bayes rule versus the prior π = (½, ½).

32. Consider a two state decision problem with Θ = {1, 2}, where the observable X = (X₁, X₂) has iid Bernoulli(1/4) coordinates if θ = 1 and iid Bernoulli(1/2) coordinates if θ = 2. Suppose that A = Θ and L(θ, a) = I[θ ≠ a]. Consider the behavioral decision rule φ defined by
φ_x({1}) = 1 if x₁ = 0, and φ_x({1}) = φ_x({2}) = ½ if x₁ = 1.
a) Show that φ is inadmissible by finding a rule with a better risk function. (It may be helpful to figure out what the risk set is for this problem, in a manner similar to what you did in Problem 31.)
b) Use the construction from Result 61 to find a behavioral decision rule that is a function of the sufficient statistic and is risk equivalent to φ. (Note that this rule is also inadmissible.)

33. Consider squared error loss estimation of p ∈ (0, 1) based on X ~ binomial(n, p), and the two nonrandomized decision rules δ₁(x) = x/n and δ₂(x) = (x + 1)/(n + 2).
Let ψ be a randomized decision function that chooses δ₁ with probability ½ and δ₂ with probability ½.
a) Write out expressions for the risk functions of δ₁, δ₂ and ψ.
b) Find a behavioral rule that is risk equivalent to ψ.
c) Identify a nonrandomized estimator that is strictly better than ψ.

34. Suppose that a decision rule φ is such that for each θ₀ ∈ Θ, φ is admissible when the parameter space is {θ₀}. Show that φ is then admissible when the parameter space is Θ.

35. Suppose that w(θ) > 0 for all θ. Show that φ is admissible with loss function L(θ, a) iff it is admissible with loss function w(θ)L(θ, a).

36. Problem 9, page 209 of Schervish.

Assignment 4

37. (Ferguson) Prove or give a counterexample: If C₁ and C₂ are complete classes of decision rules, then C₁ ∩ C₂ is essentially complete.

38. (Ferguson and others) Suppose that X ~ binomial(n, p) and one wishes to estimate p ∈ (0, 1). Suppose first that
L(p, a) = (p - a)² / (p(1 - p)).
a) Show that X/n is Bayes versus the uniform prior on (0, 1).
b) Argue that X/n is admissible in this decision problem.
c) Show that X/n is minimax and identify a least favorable prior.
Now consider ordinary squared error loss, L(p, a) = (p - a)².
d) Apply the result of Problem 35 and prove that X/n is admissible under this loss function as well.

39. Consider a two state decision problem where Θ = A = {0, 1}, P₀ and P₁ have respective densities f₀ and f₁ with respect to a dominating σ-finite measure μ, and the loss function is L(θ, a).
a) For G an arbitrary prior distribution, find a formal Bayes rule versus G.
b) Specialize your result from a) to the case where L(θ, a) = I[θ ≠ a]. What connection does the form of these rules have to the theory of simple versus simple hypothesis testing?

40. Suppose that X ~ Bernoulli(p) and that one wishes to estimate p with loss L(p, a) = |p - a|. Consider the estimator δ with δ(0) = ¼ and δ(1) = ¾.
a) Write out the risk function for δ and show that R(p, δ) ≤ ¼.
b) Show that there is a prior distribution placing all its mass on {0, ½, 1} against which δ is Bayes.
c) Prove that δ is minimax in this problem and identify a least favorable prior.

41. Consider a decision problem where P_θ is the Normal(θ, 1) distribution on X = ℝ, A = {0, 1} and L(θ, a) = I[a = 0]I[θ > 5] + I[a = 1]I[θ ≤ 5].
a) For Θ = (-∞, 5] ∪ [6, ∞), guess what prior distribution is least favorable, find the corresponding Bayes decision rule and prove that it is minimax.
b) For Θ = ℝ, guess what decision rule might be minimax, find its risk function and prove that it is minimax.

42. (Ferguson and others) Consider (inverse mean) weighted squared error loss estimation of λ, the mean of a Poisson distribution.
That is, let A œðß_ñß T œ AßT - be the Poisson distribution on k œößßß$ßþþþ and PÐ-ß+Ñ œ - Ð- +Ñ ÞLet $ÐÑ œþ a) Sho that $ is an equalizer rule. b) Sho that $ is generalized Bayes versus Lebesgue measure on A. c) Find the Bayes estimators rt the > Ð, Ñ priors on A. d) Prove that $ is minimax for this problem. 11

43. Problem 14, page 340 Schervish.

44. Let N = (N₁, N₂, ..., N_k) be multinomial(n, p₁, p₂, ..., p_k), where Σᵢ pᵢ = 1.
a) Find the form of the Fisher information matrix based on the parameter p.
b) Suppose that the pᵢ(θ), for i = 1, 2, ..., k, are differentiable functions of a real parameter θ in an open interval, where each pᵢ(θ) > 0 and Σᵢ pᵢ(θ) = 1. Suppose that h(y₁, y₂, ..., y_k) is a continuous real-valued function with continuous first partial derivatives, and define q(θ) = h(p₁(θ), p₂(θ), ..., p_k(θ)). Show that the information bound for unbiased estimators of q(θ) in this context is (q′(θ))² / (n I₁(θ)), where
I₁(θ) = Σ from i = 1 to k of pᵢ(θ) ((d/dθ) log pᵢ(θ))².

45. Show that the C-R inequality is not changed by a smooth reparameterization. That is, suppose that P = {P_θ : θ ∈ Θ} is dominated by a σ-finite measure μ and satisfies
i) f_θ(x) > 0 for all θ and x,
ii) for all x, (d/dθ) f_θ(x) exists and is finite everywhere on Θ ⊂ ℝ, and
iii) for any statistic δ with E_θ|δ(X)| < ∞ for all θ, E_θ δ(X) can be differentiated under the integral sign at all points of Θ.
Let h be a function to ℝ such that h′ is continuous and nonvanishing. Let η = h(θ) and define Q_η = P_θ. Show that the information inequality bound obtained from {Q_η} evaluated at η = h(θ) is the same as the bound obtained from P.

46. As an application of the Lemma, consider the following. Suppose that X ~ U(θ₁, θ₂) and consider unbiased estimators of q(θ₁, θ₂). For θ₁′ and θ₂′, apply the Lemma with
g₁(x) = f_{(θ₁′, θ₂)}(x) / f_{(θ₁, θ₂)}(x) and g₂(x) = f_{(θ₁, θ₂′)}(x) / f_{(θ₁, θ₂)}(x).
What does the Lemma then say about the variance of an unbiased estimator of q(θ₁, θ₂)? (If you can do it, sup over values of θ₁′ and θ₂′. I've not tried this, so am not sure how it comes out.)

47. Problem 11, page 39 Schervish.

48. Consider the estimation problem with iid P_θ observations X₁, X₂, ..., Xₙ, where P_θ is the exponential distribution with mean θ.
Let G be the group of scale transformations on X = (0, ∞)ⁿ, G = {g_c : c > 0}, where g_c(x) = cx.
a) Show that the estimation problem with loss L(θ, a) = (log(a/θ))² is invariant under G, and say what relationship any equivariant nonrandomized decision rule must satisfy.
b) Show that the estimation problem with loss L(θ, a) = (θ - a)²/θ² is invariant under G, and say what relationship any equivariant nonrandomized estimator must satisfy.

c) Find the generalized Bayes estimator of θ in situation b) if the prior has density wrt Lebesgue measure g(θ) = θ⁻¹ I[θ > 0]. Argue that this estimator is the best equivariant estimator in the situation of b).
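One way to organize the computation in Problem 48 c), writing S = X₁ + ⋯ + Xₙ (so the likelihood is θ^(-n) e^(-S/θ)): under the loss (θ - a)²/θ², the generalized Bayes estimate is the ratio E[θ⁻¹ | x] / E[θ⁻² | x], and with the prior density θ⁻¹ both moments are gamma-type integrals,

```latex
\delta(x) \;=\; \frac{E[\theta^{-1}\mid x]}{E[\theta^{-2}\mid x]}
\;=\; \frac{\int_0^\infty \theta^{-1}\,\theta^{-(n+1)}e^{-S/\theta}\,d\theta}
           {\int_0^\infty \theta^{-2}\,\theta^{-(n+1)}e^{-S/\theta}\,d\theta}
\;=\; \frac{\Gamma(n+1)/S^{\,n+1}}{\Gamma(n+2)/S^{\,n+2}}
\;=\; \frac{S}{n+1}.
```

(The substitution u = S/θ turns each integral into a gamma integral.)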

Assignment 5

49. Problem 19, page 341 Schervish. Also, find the MRE estimator of θ under squared error loss for this model.

50. Let f(u, v) be the bivariate probability density of the distribution uniform on the square in (u, v)-space with corners at (-√2, 0), (0, -√2), (√2, 0) and (0, √2). Suppose that X = (X₁, X₂) has bivariate probability density f(x | θ) = f(x₁ - θ, x₂ - θ). Find explicitly the best location equivariant estimator of θ under squared error loss. (It may well help you to visualize this problem to sketch the joint density here for θ = 7.)

51. Consider again the scenario of Problem 31. There you sketched the risk set S. Here sketch the corresponding set V.

52. Prove the following "filling-in" lemma:
Lemma. Suppose that g₀ and g₁ are two distinct, positive probability densities defined on an interval in ℝ. If the ratio g₁/g₀ is nondecreasing in a real-valued function T(x), then the family of densities {g_θ : θ ∈ [0, 1]} for g_θ = (1 - θ)g₀ + θg₁ has the MLR property in T(x).

53. Consider the two distributions P₀ and P₁ on X = (0, 1) with densities wrt Lebesgue measure f₀(x) = 1 and f₁(x) = 3x².
a) Find a most powerful level α test of H₀: X ~ P₀ vs H₁: X ~ P₁.
b) Plot both the 0-1 loss risk set S and the set V for the simple versus simple testing problem involving f₀ and f₁. What test is Bayes versus a uniform prior? This test is best of what size?
c) Take the result of Problem 52 as given. Consider the family of mixture distributions P = {P_θ} where for θ ∈ [0, 1], P_θ = (1 - θ)P₀ + θP₁. In this family find a UMP level α test of the hypothesis H₀: θ ≤ .5 vs H₁: θ > .5 based on a single observation. Argue carefully that your test is really UMP.

54. Consider a composite versus composite testing problem in a family of distributions P = {P_θ} dominated by the σ-finite measure μ, and specifically a Bayesian decision-theoretic approach to this problem with prior distribution G under 0-1 loss.
a) Show that if G(Θ₀) = 0 then φ ≡ 1 is Bayes, while if G(Θ₁) = 0 then φ ≡ 0 is Bayes.
b) Show that if G(Θ₀) > 0 and G(Θ₁) > 0, then the Bayes test against G has Neyman-Pearson form for densities g₀ and g₁ on X defined by

g₀(x) = ∫ over Θ₀ of f_θ(x) dG(θ) / G(Θ₀) and g₁(x) = ∫ over Θ₁ of f_θ(x) dG(θ) / G(Θ₁).
c) Suppose that X is Normal(θ, 1). Find Bayes tests of H₀: θ = 0 vs H₁: θ ≠ 0,
i) supposing that G is the Normal(0, σ²) distribution, and
ii) supposing that G is ½(N + Δ), for N the Normal(0, σ²) distribution and Δ a point mass distribution at 0.

55. Fix α ∈ (0, .5) and c ∈ (α, 1). Let Θ = {0} ∪ [1, 2] and consider the discrete distributions with probability mass functions as below.
(table of pmfs for X under θ = 0 and under θ ≠ 0, with entries given in terms of α and c)
Find the size α likelihood ratio test of H₀: θ = 0 vs H₁: θ ≠ 0. Show that the test φ(x) = I[x = 1] is of size α and is strictly more powerful than the LRT, whatever be θ. (This simple example shows that LRTs need not necessarily be in any sense optimal.)

56. Suppose that X is Poisson with mean θ. Find the UMP test of size α = .05 of H₀: θ ≤ 4 or θ ≥ 10 vs H₁: 4 < θ < 10. The following table of Poisson probabilities P_θ[X = x] will be useful in this enterprise.

x       0     1     2     3     4     5     6     7     8     9     10    11    12
θ = 4   .0183 .0733 .1465 .1954 .1954 .1563 .1042 .0595 .0298 .0132 .0053 .0019 .0006
θ = 10  .0000 .0005 .0023 .0076 .0189 .0378 .0631 .0901 .1126 .1251 .1251 .1137 .0948

x       13    14    15    16    17    18    19    20    21    22    23    24    25
θ = 4   .0002 .0001 .0000
θ = 10  .0729 .0521 .0347 .0217 .0128 .0071 .0037 .0019 .0009 .0004 .0002 .0001 .0000

57. Suppose that X is exponential with mean λ. Set up the two equations that will have to be solved simultaneously in order to find the UMP size α test of H₀: λ ≤ .5 or λ ≥ 2 vs H₁: λ ∈ (.5, 2).

58. Consider the situation of Problem 23. Find the form of UMP tests of H₀: θ ≤ θ₀ vs H₁: θ > θ₀. Discuss how you would go about choosing an appropriate constant to produce a test of approximately size α. Discuss how you would go about finding an optimal (approximately) 90% confidence set for θ.

59. (Problem 37, page 122 TSH.) Let X₁, ..., Xₘ and Y₁, ..., Yₙ be independent samples from N(ξ, 1) and N(η, 1), and consider the hypotheses H₀: η ≤ ξ and H₁: η > ξ. Show that there exists a UMP test of size α, and that it rejects H₀ when Ȳ - X̄ is too large. (If ξ₁ < η₁ is a particular alternative, the distribution assigning probability 1 to the point with η = ξ = (mξ₁ + nη₁)/(m + n) is least favorable at size α.)

60. Problem 57, Schervish.
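For Problem 57: if, as two-sided UMP theory for exponential families suggests, the test rejects for x in an interval (c₁, c₂), the two equations are the equal-size conditions P_λ(c₁ < X < c₂) = α at the boundary means λ = .5 and λ = 2, i.e. e^(-2c₁) - e^(-2c₂) = α and e^(-c₁/2) - e^(-c₂/2) = α. A minimal numerical sketch (the size α = .05 and the bisection scheme are my own illustrative choices, not part of the problem):

```python
import math

ALPHA = 0.05            # assumed size for illustration
LAM0, LAM1 = 0.5, 2.0   # boundary means of H0

def upper_cut(c1):
    # c2 solving the size condition at mean .5: exp(-c1/.5) - exp(-c2/.5) = ALPHA
    return -LAM0 * math.log(math.exp(-c1 / LAM0) - ALPHA)

def size_gap(c1):
    # size at the other boundary mean 2, minus ALPHA
    c2 = upper_cut(c1)
    return math.exp(-c1 / LAM1) - math.exp(-c2 / LAM1) - ALPHA

# bisection on c1: size_gap changes sign on (0, -LAM0*log(ALPHA))
lo, hi = 1e-9, -LAM0 * math.log(ALPHA) - 1e-6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if size_gap(lo) * size_gap(mid) <= 0:
        hi = mid
    else:
        lo = mid
c1 = 0.5 * (lo + hi)
c2 = upper_cut(c1)
```

By construction the λ = .5 equation holds exactly, and the bisection drives the λ = 2 equation to hold as well, so (c₁, c₂) satisfies both size conditions simultaneously.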


More information

Chapter Eight The Formal Structure of Quantum Mechanical Function Spaces

Chapter Eight The Formal Structure of Quantum Mechanical Function Spaces Chapter Eight The Formal Structure of Quantum Mechanical Function Spaces Introduction In this chapter, e introduce the formal structure of quantum theory. This formal structure is based primarily on the

More information

LECTURE NOTES 57. Lecture 9

LECTURE NOTES 57. Lecture 9 LECTURE NOTES 57 Lecture 9 17. Hypothesis testing A special type of decision problem is hypothesis testing. We partition the parameter space into H [ A with H \ A = ;. Wewrite H 2 H A 2 A. A decision problem

More information

Unobservable Parameter. Observed Random Sample. Calculate Posterior. Choosing Prior. Conjugate prior. population proportion, p prior:

Unobservable Parameter. Observed Random Sample. Calculate Posterior. Choosing Prior. Conjugate prior. population proportion, p prior: Pi Priors Unobservable Parameter population proportion, p prior: π ( p) Conjugate prior π ( p) ~ Beta( a, b) same PDF family exponential family only Posterior π ( p y) ~ Beta( a + y, b + n y) Observed

More information

Let us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided

Let us first identify some classes of hypotheses. simple versus simple. H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided Let us first identify some classes of hypotheses. simple versus simple H 0 : θ = θ 0 versus H 1 : θ = θ 1. (1) one-sided H 0 : θ θ 0 versus H 1 : θ > θ 0. (2) two-sided; null on extremes H 0 : θ θ 1 or

More information

Testing Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata

Testing Hypothesis. Maura Mezzetti. Department of Economics and Finance Università Tor Vergata Maura Department of Economics and Finance Università Tor Vergata Hypothesis Testing Outline It is a mistake to confound strangeness with mystery Sherlock Holmes A Study in Scarlet Outline 1 The Power Function

More information

Bayesian Properties of Normalized Maximum Likelihood and its Fast Computation

Bayesian Properties of Normalized Maximum Likelihood and its Fast Computation Bayesian Properties of Normalized Maximum Likelihood and its Fast Computation Andre Barron Department of Statistics Yale University Email: andre.barron@yale.edu Teemu Roos HIIT & Dept of Computer Science

More information

8œ! This theorem is justified by repeating the process developed for a Taylor polynomial an infinite number of times.

8œ! This theorem is justified by repeating the process developed for a Taylor polynomial an infinite number of times. Taylor and Maclaurin Series We can use the same process we used to find a Taylor or Maclaurin polynomial to find a power series for a particular function as long as the function has infinitely many derivatives.

More information

Notes on Decision Theory and Prediction

Notes on Decision Theory and Prediction Notes on Decision Theory and Prediction Ronald Christensen Professor of Statistics Department of Mathematics and Statistics University of New Mexico October 7, 2014 1. Decision Theory Decision theory is

More information

You may have a simple scientific calculator to assist with arithmetic, but no graphing calculators are allowed on this exam.

You may have a simple scientific calculator to assist with arithmetic, but no graphing calculators are allowed on this exam. Math 131 Exam 3 Solutions You may have a simple scientific calculator to assist ith arithmetic, but no graphing calculators are alloed on this exam. Part I consists of 14 multiple choice questions (orth

More information

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology

40.530: Statistics. Professor Chen Zehua. Singapore University of Design and Technology Singapore University of Design and Technology Lecture 9: Hypothesis testing, uniformly most powerful tests. The Neyman-Pearson framework Let P be the family of distributions of concern. The Neyman-Pearson

More information

Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory

Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory Statistical Inference Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory IP, José Bioucas Dias, IST, 2007

More information

Modèles stochastiques II

Modèles stochastiques II Modèles stochastiques II INFO 154 Gianluca Bontempi Département d Informatique Boulevard de Triomphe - CP 1 http://ulbacbe/di Modéles stochastiques II p1/50 The basics of statistics Statistics starts ith

More information

Chapter 4 HOMEWORK ASSIGNMENTS. 4.1 Homework #1

Chapter 4 HOMEWORK ASSIGNMENTS. 4.1 Homework #1 Chapter 4 HOMEWORK ASSIGNMENTS These homeworks may be modified as the semester progresses. It is your responsibility to keep up to date with the correctly assigned homeworks. There may be some errors in

More information

Stat 8112 Lecture Notes Weak Convergence in Metric Spaces Charles J. Geyer January 23, Metric Spaces

Stat 8112 Lecture Notes Weak Convergence in Metric Spaces Charles J. Geyer January 23, Metric Spaces Stat 8112 Lecture Notes Weak Convergence in Metric Spaces Charles J. Geyer January 23, 2013 1 Metric Spaces Let X be an arbitrary set. A function d : X X R is called a metric if it satisfies the folloing

More information

The Rational Numbers

The Rational Numbers The Rational Numbers Fields The system of integers that e formally defined is an improvement algebraically on the hole number system = (e can subtract in ) But still has some serious deficiencies: for

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 23, 2015 1 / 50 I-Hsiang

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

Section 1.3 Functions and Their Graphs 19

Section 1.3 Functions and Their Graphs 19 23. 0 1 2 24. 0 1 2 y 0 1 0 y 1 0 0 Section 1.3 Functions and Their Graphs 19 3, Ÿ 1, 0 25. y œ 26. y œ œ 2, 1 œ, 0 Ÿ " 27. (a) Line through a!ß! band a"ß " b: y œ Line through a"ß " band aß! b: y œ 2,

More information

Review of Elementary Probability Theory Math 495, Spring 2011

Review of Elementary Probability Theory Math 495, Spring 2011 Revie of Elementary Probability Theory Math 495, Spring 2011 0.1 Probability spaces 0.1.1 Definition. A probability space is a triple ( HT,,P) here ( 3Ñ H is a non-empty setà Ð33Ñ T is a collection of

More information

Lecture 4. f X T, (x t, ) = f X,T (x, t ) f T (t )

Lecture 4. f X T, (x t, ) = f X,T (x, t ) f T (t ) LECURE NOES 21 Lecture 4 7. Sufficient statistics Consider the usual statistical setup: the data is X and the paramter is. o gain information about the parameter we study various functions of the data

More information

SVM example: cancer classification Support Vector Machines

SVM example: cancer classification Support Vector Machines SVM example: cancer classification Support Vector Machines 1. Cancer genomics: TCGA The cancer genome atlas (TCGA) will provide high-quality cancer data for large scale analysis by many groups: SVM example:

More information

CS 361: Probability & Statistics

CS 361: Probability & Statistics October 17, 2017 CS 361: Probability & Statistics Inference Maximum likelihood: drawbacks A couple of things might trip up max likelihood estimation: 1) Finding the maximum of some functions can be quite

More information

HYPOTHESIS TESTING: FREQUENTIST APPROACH.

HYPOTHESIS TESTING: FREQUENTIST APPROACH. HYPOTHESIS TESTING: FREQUENTIST APPROACH. These notes summarize the lectures on (the frequentist approach to) hypothesis testing. You should be familiar with the standard hypothesis testing from previous

More information

Example Let VœÖÐBßCÑ À b-, CœB - ( see the example above). Explain why

Example Let VœÖÐBßCÑ À b-, CœB - ( see the example above). Explain why Definition If V is a relation from E to F, then a) the domain of V œ dom ÐVÑ œ Ö+ E À b, F such that Ð+ß,Ñ V b) the range of V œ ran( VÑ œ Ö, F À b+ E such that Ð+ß,Ñ V " c) the inverse relation of V œ

More information

OUTLINING PROOFS IN CALCULUS. Andrew Wohlgemuth

OUTLINING PROOFS IN CALCULUS. Andrew Wohlgemuth OUTLINING PROOFS IN CALCULUS Andre Wohlgemuth ii OUTLINING PROOFS IN CALCULUS Copyright 1998, 001, 00 Andre Wohlgemuth This text may be freely donloaded and copied for personal or class use iii Contents

More information

STATISTICS SYLLABUS UNIT I

STATISTICS SYLLABUS UNIT I STATISTICS SYLLABUS UNIT I (Probability Theory) Definition Classical and axiomatic approaches.laws of total and compound probability, conditional probability, Bayes Theorem. Random variable and its distribution

More information

A Generalization of a result of Catlin: 2-factors in line graphs

A Generalization of a result of Catlin: 2-factors in line graphs AUSTRALASIAN JOURNAL OF COMBINATORICS Volume 72(2) (2018), Pages 164 184 A Generalization of a result of Catlin: 2-factors in line graphs Ronald J. Gould Emory University Atlanta, Georgia U.S.A. rg@mathcs.emory.edu

More information

STAT 2122 Homework # 3 Solutions Fall 2018 Instr. Sonin

STAT 2122 Homework # 3 Solutions Fall 2018 Instr. Sonin STAT 2122 Homeork Solutions Fall 2018 Instr. Sonin Due Friday, October 5 NAME (25 + 5 points) Sho all ork on problems! (4) 1. In the... Lotto, you may pick six different numbers from the set Ö"ß ß $ß ÞÞÞß

More information

Math 131, Exam 2 Solutions, Fall 2010

Math 131, Exam 2 Solutions, Fall 2010 Math 131, Exam 2 Solutions, Fall 2010 Part I consists of 14 multiple choice questions (orth 5 points each) and 5 true/false questions (orth 1 point each), for a total of 75 points. Mark the correct anser

More information

Chapter 2. Review of basic Statistical methods 1 Distribution, conditional distribution and moments

Chapter 2. Review of basic Statistical methods 1 Distribution, conditional distribution and moments Chapter 2. Review of basic Statistical methods 1 Distribution, conditional distribution and moments We consider two kinds of random variables: discrete and continuous random variables. For discrete random

More information

Here are proofs for some of the results about diagonalization that were presented without proof in class.

Here are proofs for some of the results about diagonalization that were presented without proof in class. Suppose E is an 8 8 matrix. In what follows, terms like eigenvectors, eigenvalues, and eigenspaces all refer to the matrix E. Here are proofs for some of the results about diagonalization that were presented

More information

Example: A Markov Process

Example: A Markov Process Example: A Markov Process Divide the greater metro region into three parts: city such as St. Louis), suburbs to include such areas as Clayton, University City, Richmond Heights, Maplewood, Kirkwood,...)

More information

Outline. Binomial, Multinomial, Normal, Beta, Dirichlet. Posterior mean, MAP, credible interval, posterior distribution

Outline. Binomial, Multinomial, Normal, Beta, Dirichlet. Posterior mean, MAP, credible interval, posterior distribution Outline A short review on Bayesian analysis. Binomial, Multinomial, Normal, Beta, Dirichlet Posterior mean, MAP, credible interval, posterior distribution Gibbs sampling Revisit the Gaussian mixture model

More information

Lecture 3. G. Cowan. Lecture 3 page 1. Lectures on Statistical Data Analysis

Lecture 3. G. Cowan. Lecture 3 page 1. Lectures on Statistical Data Analysis Lecture 3 1 Probability (90 min.) Definition, Bayes theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo 2 Statistical tests (90 min.) general concepts, test statistics,

More information

Dr. H. Joseph Straight SUNY Fredonia Smokin' Joe's Catalog of Groups: Direct Products and Semi-direct Products

Dr. H. Joseph Straight SUNY Fredonia Smokin' Joe's Catalog of Groups: Direct Products and Semi-direct Products Dr. H. Joseph Straight SUNY Fredonia Smokin' Joe's Catalog of Groups: Direct Products and Semi-direct Products One of the fundamental problems in group theory is to catalog all the groups of some given

More information

NOVEMBER 2005 EXAM NOVEMBER 2005 CAS COURSE 3 EXAM SOLUTIONS 1. Maximum likelihood estimation: The log of the density is 68 0ÐBÑ œ 68 ) Ð) "Ñ 68 B. The loglikelihood function is & j œ 68 0ÐB Ñ œ & 68 )

More information

Statistical Data Analysis Stat 3: p-values, parameter estimation

Statistical Data Analysis Stat 3: p-values, parameter estimation Statistical Data Analysis Stat 3: p-values, parameter estimation London Postgraduate Lectures on Particle Physics; University of London MSci course PH4515 Glen Cowan Physics Department Royal Holloway,

More information

Practice Problems Section Problems

Practice Problems Section Problems Practice Problems Section 4-4-3 4-4 4-5 4-6 4-7 4-8 4-10 Supplemental Problems 4-1 to 4-9 4-13, 14, 15, 17, 19, 0 4-3, 34, 36, 38 4-47, 49, 5, 54, 55 4-59, 60, 63 4-66, 68, 69, 70, 74 4-79, 81, 84 4-85,

More information

Lecture notes on statistical decision theory Econ 2110, fall 2013

Lecture notes on statistical decision theory Econ 2110, fall 2013 Lecture notes on statistical decision theory Econ 2110, fall 2013 Maximilian Kasy March 10, 2014 These lecture notes are roughly based on Robert, C. (2007). The Bayesian choice: from decision-theoretic

More information

Inner Product Spaces

Inner Product Spaces Inner Product Spaces In 8 X, we defined an inner product? @? @?@ ÞÞÞ? 8@ 8. Another notation sometimes used is? @? ß@. The inner product in 8 has several important properties ( see Theorem, p. 33) that

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

Ch. 5 Hypothesis Testing

Ch. 5 Hypothesis Testing Ch. 5 Hypothesis Testing The current framework of hypothesis testing is largely due to the work of Neyman and Pearson in the late 1920s, early 30s, complementing Fisher s work on estimation. As in estimation,

More information

Notes on the Unique Extension Theorem. Recall that we were interested in defining a general measure of a size of a set on Ò!ß "Ó.

Notes on the Unique Extension Theorem. Recall that we were interested in defining a general measure of a size of a set on Ò!ß Ó. 1. More on measures: Notes on the Unique Extension Theorem Recall that we were interested in defining a general measure of a size of a set on Ò!ß "Ó. Defined this measure T. Defined TÐ ( +ß, ) Ñ œ, +Þ

More information

Definition Suppose M is a collection (set) of sets. M is called inductive if

Definition Suppose M is a collection (set) of sets. M is called inductive if Definition Suppose M is a collection (set) of sets. M is called inductive if a) g M, and b) if B Mß then B MÞ Then we ask: are there any inductive sets? Informally, it certainly looks like there are. For

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2008 Prof. Gesine Reinert 1 Data x = x 1, x 2,..., x n, realisations of random variables X 1, X 2,..., X n with distribution (model)

More information

UC Berkeley Department of Electrical Engineering and Computer Sciences. EECS 126: Probability and Random Processes

UC Berkeley Department of Electrical Engineering and Computer Sciences. EECS 126: Probability and Random Processes UC Berkeley Department of Electrical Engineering and Computer Sciences EECS 26: Probability and Random Processes Problem Set Spring 209 Self-Graded Scores Due:.59 PM, Monday, February 4, 209 Submit your

More information

Homework 7: Solutions. P3.1 from Lehmann, Romano, Testing Statistical Hypotheses.

Homework 7: Solutions. P3.1 from Lehmann, Romano, Testing Statistical Hypotheses. Stat 300A Theory of Statistics Homework 7: Solutions Nikos Ignatiadis Due on November 28, 208 Solutions should be complete and concisely written. Please, use a separate sheet or set of sheets for each

More information

STAT 302 Introduction to Probability Learning Outcomes. Textbook: A First Course in Probability by Sheldon Ross, 8 th ed.

STAT 302 Introduction to Probability Learning Outcomes. Textbook: A First Course in Probability by Sheldon Ross, 8 th ed. STAT 302 Introduction to Probability Learning Outcomes Textbook: A First Course in Probability by Sheldon Ross, 8 th ed. Chapter 1: Combinatorial Analysis Demonstrate the ability to solve combinatorial

More information

Topic 10: Hypothesis Testing

Topic 10: Hypothesis Testing Topic 10: Hypothesis Testing Course 003, 2016 Page 0 The Problem of Hypothesis Testing A statistical hypothesis is an assertion or conjecture about the probability distribution of one or more random variables.

More information

Machine Learning: Logistic Regression. Lecture 04

Machine Learning: Logistic Regression. Lecture 04 Machine Learning: Logistic Regression Razvan C. Bunescu School of Electrical Engineering and Computer Science bunescu@ohio.edu Supervised Learning Task = learn an (unkon function t : X T that maps input

More information

Engineering Mathematics (E35 317) Final Exam December 15, 2006

Engineering Mathematics (E35 317) Final Exam December 15, 2006 Engineering Mathematics (E35 317) Final Exam December 15, 2006 This exam contains six free-resonse roblems orth 36 oints altogether, eight short-anser roblems orth one oint each, seven multile-choice roblems

More information

Introduction to Bayesian Statistics

Introduction to Bayesian Statistics Bayesian Parameter Estimation Introduction to Bayesian Statistics Harvey Thornburg Center for Computer Research in Music and Acoustics (CCRMA) Department of Music, Stanford University Stanford, California

More information

Hypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes

Hypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes Neyman-Pearson paradigm. Suppose that a researcher is interested in whether the new drug works. The process of determining whether the outcome of the experiment points to yes or no is called hypothesis

More information

STAT 830 Hypothesis Testing

STAT 830 Hypothesis Testing STAT 830 Hypothesis Testing Hypothesis testing is a statistical problem where you must choose, on the basis of data X, between two alternatives. We formalize this as the problem of choosing between two

More information

The Hahn-Banach and Radon-Nikodym Theorems

The Hahn-Banach and Radon-Nikodym Theorems The Hahn-Banach and Radon-Nikodym Theorems 1. Signed measures Definition 1. Given a set Hand a 5-field of sets Y, we define a set function. À Y Ä to be a signed measure if it has all the properties of

More information

Chapter 6. Hypothesis Tests Lecture 20: UMP tests and Neyman-Pearson lemma

Chapter 6. Hypothesis Tests Lecture 20: UMP tests and Neyman-Pearson lemma Chapter 6. Hypothesis Tests Lecture 20: UMP tests and Neyman-Pearson lemma Theory of testing hypotheses X: a sample from a population P in P, a family of populations. Based on the observed X, we test a

More information

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review

STATS 200: Introduction to Statistical Inference. Lecture 29: Course review STATS 200: Introduction to Statistical Inference Lecture 29: Course review Course review We started in Lecture 1 with a fundamental assumption: Data is a realization of a random process. The goal throughout

More information

STAT 830 Hypothesis Testing

STAT 830 Hypothesis Testing STAT 830 Hypothesis Testing Richard Lockhart Simon Fraser University STAT 830 Fall 2018 Richard Lockhart (Simon Fraser University) STAT 830 Hypothesis Testing STAT 830 Fall 2018 1 / 30 Purposes of These

More information

Some General Types of Tests

Some General Types of Tests Some General Types of Tests We may not be able to find a UMP or UMPU test in a given situation. In that case, we may use test of some general class of tests that often have good asymptotic properties.

More information

Recall that in order to prove Theorem 8.8, we argued that under certain regularity conditions, the following facts are true under H 0 : 1 n

Recall that in order to prove Theorem 8.8, we argued that under certain regularity conditions, the following facts are true under H 0 : 1 n Chapter 9 Hypothesis Testing 9.1 Wald, Rao, and Likelihood Ratio Tests Suppose we wish to test H 0 : θ = θ 0 against H 1 : θ θ 0. The likelihood-based results of Chapter 8 give rise to several possible

More information

Consider this problem. A person s utility function depends on consumption and leisure. Of his market time (the complement to leisure), h t

Consider this problem. A person s utility function depends on consumption and leisure. Of his market time (the complement to leisure), h t VI. INEQUALITY CONSTRAINED OPTIMIZATION Application of the Kuhn-Tucker conditions to inequality constrained optimization problems is another very, very important skill to your career as an economist. If

More information

1. Classify each number. Choose all correct answers. b. È # : (i) natural number (ii) integer (iii) rational number (iv) real number

1. Classify each number. Choose all correct answers. b. È # : (i) natural number (ii) integer (iii) rational number (iv) real number Review for Placement Test To ypass Math 1301 College Algebra Department of Computer and Mathematical Sciences University of Houston-Downtown Revised: Fall 2009 PLEASE READ THE FOLLOWING CAREFULLY: 1. The

More information

EXST Regression Techniques Page 1 SIMPLE LINEAR REGRESSION WITH MATRIX ALGEBRA

EXST Regression Techniques Page 1 SIMPLE LINEAR REGRESSION WITH MATRIX ALGEBRA EXST7034 - Regression Techniques Page 1 SIMPLE LINEAR REGRESSION WITH MATRIX ALGEBRA MODEL: Y 3 = "! + "" X 3 + % 3 MATRIX MODEL: Y = XB + E Ô Y" Ô 1 X" Ô e" Y# 1 X# b! e# or Ö Ù = Ö Ù Ö Ù b ã ã ã " ã

More information

CHAPTER 3 THE COMMON FACTOR MODEL IN THE POPULATION. From Exploratory Factor Analysis Ledyard R Tucker and Robert C. MacCallum

CHAPTER 3 THE COMMON FACTOR MODEL IN THE POPULATION. From Exploratory Factor Analysis Ledyard R Tucker and Robert C. MacCallum CHAPTER 3 THE COMMON FACTOR MODEL IN THE POPULATION From Exploratory Factor Analysis Ledyard R Tucker and Robert C. MacCallum 1997 19 CHAPTER 3 THE COMMON FACTOR MODEL IN THE POPULATION 3.0. Introduction

More information

Statistical machine learning and kernel methods. Primary references: John Shawe-Taylor and Nello Cristianini, Kernel Methods for Pattern Analysis

Statistical machine learning and kernel methods. Primary references: John Shawe-Taylor and Nello Cristianini, Kernel Methods for Pattern Analysis Part 5 (MA 751) Statistical machine learning and kernel methods Primary references: John Shawe-Taylor and Nello Cristianini, Kernel Methods for Pattern Analysis Christopher Burges, A tutorial on support

More information

236 Chapter 4 Applications of Derivatives

236 Chapter 4 Applications of Derivatives 26 Chapter Applications of Derivatives Î$ &Î$ Î$ 5 Î$ 0 "Î$ 5( 2) $È 26. (a) g() œ ( 5) œ 5 Ê g () œ œ Ê critical points at œ 2 and œ 0 Ê g œ ± )(, increasing on ( _ß 2) and (!ß _), decreasing on ( 2 ß!)!

More information

It's Only Fitting. Fitting model to data parameterizing model estimating unknown parameters in the model

It's Only Fitting. Fitting model to data parameterizing model estimating unknown parameters in the model It's Only Fitting Fitting model to data parameterizing model estimating unknown parameters in the model Likelihood: an example Cohort of 8! individuals observe survivors at times >œ 1, 2, 3,..., : 8",

More information

Relations. Relations occur all the time in mathematics. For example, there are many relations between # and % À

Relations. Relations occur all the time in mathematics. For example, there are many relations between # and % À Relations Relations occur all the time in mathematics. For example, there are many relations between and % À Ÿ% % Á% l% Ÿ,, Á ß and l are examples of relations which might or might not hold between two

More information

CS 361: Probability & Statistics

CS 361: Probability & Statistics March 14, 2018 CS 361: Probability & Statistics Inference The prior From Bayes rule, we know that we can express our function of interest as Likelihood Prior Posterior The right hand side contains the

More information

Chances are good Modeling stochastic forces in ecology

Chances are good Modeling stochastic forces in ecology Chances are good Modeling stochastic forces in ecology Ecologists have a love-hate relationship with mathematical models The current popularity in population ecology of models so abstract that it is difficult

More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY

EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA, 00 MODULE : Statistical Inference Time Allowed: Three Hours Candidates should answer FIVE questions. All questions carry equal marks. The

More information

Stat 643 Review of Probability Results (Cressie)

Stat 643 Review of Probability Results (Cressie) Stat 643 Review of Probability Results (Cressie) Probability Space: ( HTT,, ) H is the set of outcomes T is a 5-algebra; subsets of H T is a probability measure mapping from T onto [0,] Measurable Space:

More information

Primer on statistics:

Primer on statistics: Primer on statistics: MLE, Confidence Intervals, and Hypothesis Testing ryan.reece@gmail.com http://rreece.github.io/ Insight Data Science - AI Fellows Workshop Feb 16, 018 Outline 1. Maximum likelihood

More information

Mathematical Statistics

Mathematical Statistics Mathematical Statistics MAS 713 Chapter 8 Previous lecture: 1 Bayesian Inference 2 Decision theory 3 Bayesian Vs. Frequentist 4 Loss functions 5 Conjugate priors Any questions? Mathematical Statistics

More information

Statistics 3858 : Maximum Likelihood Estimators

Statistics 3858 : Maximum Likelihood Estimators Statistics 3858 : Maximum Likelihood Estimators 1 Method of Maximum Likelihood In this method we construct the so called likelihood function, that is L(θ) = L(θ; X 1, X 2,..., X n ) = f n (X 1, X 2,...,

More information

Department of Statistical Science FIRST YEAR EXAM - SPRING 2017

Department of Statistical Science FIRST YEAR EXAM - SPRING 2017 Department of Statistical Science Duke University FIRST YEAR EXAM - SPRING 017 Monday May 8th 017, 9:00 AM 1:00 PM NOTES: PLEASE READ CAREFULLY BEFORE BEGINNING EXAM! 1. Do not write solutions on the exam;

More information