STAT215: Solutions for Homework 1
Due: Wednesday, Jan 30

1. (10 pt) (a) For X ~ Be(α, β), evaluate E[X^a (1−X)^b] for all real numbers a and b. For which a, b is it finite? (b) What is the MGF M_{log X}(t) for the random variable Y := log X? [HINT: Integration by parts is not needed for (a), and no new calculations at all are needed for (b)!]

(a) E[X^a (1−X)^b] = ∫₀¹ x^a (1−x)^b f(x) dx
  = [Γ(α+β) / (Γ(α) Γ(β))] ∫₀¹ x^(a+α−1) (1−x)^(b+β−1) dx
  = Γ(α+β) Γ(α+a) Γ(β+b) / [Γ(α) Γ(β) Γ(α+β+a+b)].
It is finite when a > −α and b > −β.

(b) M_Y(t) = E(e^(tY)) = ∫ e^(ty) f_Y(y) dy = ∫₀¹ x^t f(x) dx = E(X^t). This is just a special case of (a), with a = t and b = 0. So
M_{log X}(t) = [Γ(α+β) / Γ(α)] · [Γ(α+t) / Γ(α+β+t)], for t > −α.

2. (10 pt) Let X ~ Ge(p). (a) Find the moment generating function M_X(t) := E[e^(tX)]. (b) Use this MGF to find the mean and variance of X.
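As a numerical sanity check (not part of the assigned solution), the Gamma-ratio formula for E[X^a (1−X)^b] can be compared against direct numerical integration of the Beta density. This is a sketch using only the Python standard library; the function names are ours:

```python
import math

def beta_moment_exact(alpha, beta, a, b):
    """E[X^a (1-X)^b] for X ~ Be(alpha, beta) via the Gamma-ratio formula.

    Finite exactly when a > -alpha and b > -beta."""
    return math.exp(
        math.lgamma(alpha + beta) + math.lgamma(alpha + a) + math.lgamma(beta + b)
        - math.lgamma(alpha) - math.lgamma(beta) - math.lgamma(alpha + beta + a + b)
    )

def beta_moment_numeric(alpha, beta, a, b, n=100_000):
    """Midpoint-rule approximation of the defining integral."""
    const = math.exp(math.lgamma(alpha + beta) - math.lgamma(alpha) - math.lgamma(beta))
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x ** (a + alpha - 1) * (1 - x) ** (b + beta - 1)
    return const * total * h

# E[X(1-X)] for Be(2,3), computed both ways:
print(beta_moment_exact(2.0, 3.0, 1.0, 1.0))
print(beta_moment_numeric(2.0, 3.0, 1.0, 1.0))
```

For Be(2, 3) and a = b = 1 the Gamma ratio gives B(3, 4)/B(2, 3) = 0.2, and the quadrature agrees to several digits.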
(a) M_X(t) = E[e^(tX)] = Σ_{x=0}^∞ e^(tx) p (1−p)^x = p / (1 − (1−p) e^t), for t < −log(1−p).

(b) The cumulant generating function is c_X(t) = log(M_X(t)) = log p − log(1 − q e^t), with q = 1−p, so
EX = c′_X(0) = [q e^t / (1 − q e^t)] at t = 0, i.e. EX = q/p,
Var X = c″_X(0) = [q e^t / (1 − q e^t)²] at t = 0, i.e. Var X = q/p².
Note: If instead you use f(y) = p q^(y−1) (support y = 1, 2, ...), then M_Y(t) = p e^t / (1 − q e^t), EY = 1/p, Var Y = q/p².

3. (10 pt) The double-exponential distribution has pdf f(x) = (λ/2) e^(−λ|x|), for fixed λ > 0. (a) Find the MGF M_X(t) of a double exponential. For which t is it finite? (b) Let U, V be iid Ex(1), and find the MGF M_Y(t) of Y := U − V. What is the distribution of Y? (c) Find the mean and variance of a double exponential. If X₁, ..., X_n are i.i.d. double exponentials, find the MGF M_n(t) of the standardized mean, W_n := (X̄ − E[X]) / √(V(X)/n). (d) What is the limit of M_n(t) as n → ∞? What distribution has this function for its MGF?

(a) M_X(t) = ∫ e^(tx) (λ/2) e^(−λ|x|) dx
  = (λ/2) ∫₀^∞ e^((t−λ)x) dx + (λ/2) ∫_(−∞)^0 e^((t+λ)x) dx
  = (λ/2) [1/(λ−t) + 1/(λ+t)]
  = λ² / (λ² − t²), finite if |t| < λ.
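The geometric mean and variance claimed in part (b) can be checked by differentiating the cumulant generating function numerically. A small sketch (function names ours, standard library only):

```python
import math

def geom_mgf(t, p):
    """MGF of X ~ Ge(p) counting failures (support 0, 1, 2, ...):
    M_X(t) = p / (1 - (1-p) e^t), valid for t < -log(1-p)."""
    q = 1.0 - p
    return p / (1.0 - q * math.exp(t))

def cgf_moments(p, h=1e-5):
    """Mean and variance from central-difference derivatives of
    c(t) = log M(t) at t = 0: mean = c'(0), variance = c''(0)."""
    c = lambda t: math.log(geom_mgf(t, p))
    mean = (c(h) - c(-h)) / (2 * h)
    var = (c(h) - 2 * c(0.0) + c(-h)) / h**2
    return mean, var

p = 0.3
mean, var = cgf_moments(p)
print(mean, (1 - p) / p)       # both should be close to q/p
print(var, (1 - p) / p**2)     # both should be close to q/p^2
```

With p = 0.3 the numerical derivatives agree with q/p ≈ 2.333 and q/p² ≈ 7.778 to several digits.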
(b) M_Y(t) = E[e^(t(U−V))] = E[e^(tU)] E[e^(−tV)] = [1/(1−t)] · [1/(1+t)] = 1/(1−t²), which is the MGF of a double exponential, so Y is a double exponential with λ = 1.

(c) For a double exponential, EX = M′_X(0) = 0 and Var X = M″_X(0) = 2/λ². So
M_n(t) = E(e^(t W_n)) = [M_X(t / √(n V(X)))]^n = [λ² / (λ² − t² λ²/(2n))]^n = (1 − t²/2n)^(−n).

(d) As n → ∞, M_n(t) = (1 − t²/2n)^(−n) → e^(t²/2), which is the MGF of a No(0, 1).

4. (10 pt) Bickel & Doksum pg 71, problem 1.2.1.

(a) π(θ=θ₁ | x=0) = π(θ=θ₁) π(x=0 | θ=θ₁) / [π(θ=θ₁) π(x=0 | θ=θ₁) + π(θ=θ₂) π(x=0 | θ=θ₂)] = (1/2)(0.8) / [(1/2)(0.8) + (1/2)(0.4)] = 2/3.
Similarly, π(θ=θ₂ | x=0) = 1/3, π(θ=θ₁ | x=1) = 1/4, and π(θ=θ₂ | x=1) = 3/4. We might summarize this posterior frequency function in the following way:

           x = 0    x = 1
  θ = θ₁    2/3      1/4
  θ = θ₂    1/3      3/4
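The MGF convergence in part (d) (a CLT for double exponentials) is easy to see numerically. A quick sketch:

```python
import math

def Mn(t, n):
    """MGF of the standardized mean W_n of n i.i.d. double exponentials:
    M_n(t) = (1 - t^2/(2n))^(-n), defined once t^2 < 2n."""
    return (1.0 - t * t / (2.0 * n)) ** (-n)

t = 1.3
target = math.exp(t * t / 2)    # MGF of No(0,1) evaluated at t
for n in (10, 100, 10_000, 1_000_000):
    print(n, Mn(t, n), target)
```

As n grows, M_n(t) approaches e^(t²/2) from above at this t, illustrating the limit.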
(b) Note that
π(x₁, x₂, ..., x_n | θ=θ₁) = Π_{i=1}^n (0.2)^(x_i) (0.8)^(1−x_i) = (0.2)^(Σx_i) (0.8)^(n−Σx_i).
So, writing S_n = Σx_i,
π(θ=θ₁ | x₁, ..., x_n) = π(θ=θ₁) π(x | θ=θ₁) / [π(θ=θ₁) π(x | θ=θ₁) + π(θ=θ₂) π(x | θ=θ₂)]
  = (1/2)(0.2)^(S_n)(0.8)^(n−S_n) / [(1/2)(0.2)^(S_n)(0.8)^(n−S_n) + (1/2)(0.6)^(S_n)(0.4)^(n−S_n)]
  = 2^(n−S_n) / [2^(n−S_n) + 3^(S_n)]
  = 2^n / [2^n + 6^(S_n)].
Similarly, π(θ=θ₂ | x₁, ..., x_n) = 6^(S_n) / [2^n + 6^(S_n)].

(c) Similar to (b). With the prior π₁(θ=θ₁) = 0.25 and π₁(θ=θ₂) = 0.75 we have:
π₁(θ=θ₁ | x₁, ..., x_n) = (0.25)(0.2)^(S_n)(0.8)^(n−S_n) / [(0.25)(0.2)^(S_n)(0.8)^(n−S_n) + (0.75)(0.6)^(S_n)(0.4)^(n−S_n)] = 2^n / [2^n + 3·6^(S_n)],
and
π₁(θ=θ₂ | x₁, ..., x_n) = (0.75)(0.6)^(S_n)(0.4)^(n−S_n) / [(0.25)(0.2)^(S_n)(0.8)^(n−S_n) + (0.75)(0.6)^(S_n)(0.4)^(n−S_n)] = 3·6^(S_n) / [2^n + 3·6^(S_n)].

(d) When n = 2, the observation gives Σx_i = 1. The posterior distribution of θ in that case is π(θ=θ₁ | Σx_i=1) = 4/10 = 0.4 and π(θ=θ₂ | Σx_i=1) = 0.6 for the original prior. For the prior π₁ given in part (c), the posterior distribution of θ is π₁(θ=θ₁ | Σx_i=1) = 4/22 ≈ 0.18 and π₁(θ=θ₂ | Σx_i=1) = 18/22 ≈ 0.82.
When n = 100, the observation gives Σx_i = 50. In that case the original prior gives the posterior distribution π(θ=θ₁ | Σx_i=50) = 2^100 / (2^100 + 6^50) ≈ 1.6×10⁻⁹ and π(θ=θ₂ | Σx_i=50) ≈ 1. For the prior π₁, the posterior distribution is π₁(θ=θ₁ | Σx_i=50) = 2^100 / (2^100 + 3·6^50) ≈ 5.2×10⁻¹⁰ and π₁(θ=θ₂ | Σx_i=50) ≈ 1.
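The closed forms 2^n/(2^n + 6^(S_n)) and 2^n/(2^n + 3·6^(S_n)) can be verified exactly with rational arithmetic. A sketch (function names ours):

```python
from fractions import Fraction

def posterior_theta1(n, s, prior1=Fraction(1, 2)):
    """P(theta = theta1 | S_n = s), computed straight from Bayes' rule,
    with success probabilities 0.2 under theta1 and 0.6 under theta2."""
    p1, p2 = Fraction(1, 5), Fraction(3, 5)
    like1 = p1**s * (1 - p1) ** (n - s)
    like2 = p2**s * (1 - p2) ** (n - s)
    num = prior1 * like1
    return num / (num + (1 - prior1) * like2)

print(posterior_theta1(2, 1))                                       # equals 2^2/(2^2 + 6^1)
print(posterior_theta1(100, 50) == Fraction(2**100, 2**100 + 6**50))
print(posterior_theta1(2, 1, prior1=Fraction(1, 4)))                # the (1/4, 3/4) prior of part (c)
```

Exact fractions avoid any underflow from (0.2)^50(0.8)^50-sized likelihoods, so the n = 100 case checks the algebra literally.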
(e) Given that Σx_i = k, we have, under the initial prior π,
π(θ=θ₁ | Σx_i=k) = (1/2)(0.2)^k (0.8)^(n−k) / [(1/2)(0.2)^k (0.8)^(n−k) + (1/2)(0.6)^k (0.4)^(n−k)]
and
π(θ=θ₂ | Σx_i=k) = (1/2)(0.6)^k (0.4)^(n−k) / [(1/2)(0.2)^k (0.8)^(n−k) + (1/2)(0.6)^k (0.4)^(n−k)].
Since these denominators are the same, we may find the value of θ having the higher posterior probability by comparing numerators only:
π(θ=θ₁ | Σx_i=k) > π(θ=θ₂ | Σx_i=k)
  ⟺ (1/2)(0.2)^k (0.8)^(n−k) > (1/2)(0.6)^k (0.4)^(n−k)
  ⟺ (0.2/0.6)^k (0.8/0.4)^(n−k) > 1
  ⟺ −k log 3 + (n−k) log 2 > 0
  ⟺ k/n < log 2 / log 6 (after a little algebra).
This shows that under the original prior π, θ=θ₁ has a higher posterior probability given Σx_i = k than does θ=θ₂ so long as k < 0.39n. For the prior π₁ of part (c), similar algebra shows that θ=θ₁ has the higher posterior probability given Σx_i = k so long as k < 0.39n − 0.61, approximately.
Under the original prior, for n = 2, θ̂ = θ₁ if and only if k = 0. For n = 100, θ̂ = θ₁ if and only if k ∈ {0, 1, ..., 38}. Under the prior π₁, for n = 2, θ̂ = θ₁ if and only if k = 0, as before. For n = 100, θ̂ = θ₁ if and only if k ∈ {0, 1, ..., 38}, exactly the same as under the other prior.

(f) The only time the two θ̂'s will disagree is if k ∈ (0.39n − 0.61, 0.39n), i.e. k ≈ 0.39n (approximately). For example, if n = 1000, then the two different priors would yield different estimates θ̂ only if k happened to be 390. As n goes to infinity, the probability of Σx_i taking any one particular value k goes to zero, so the region on which the two θ̂'s disagree has probability going to zero.

5. (10 pt) Bickel & Doksum pg 74, problem 1.2.4.
The solution to this problem is worked out in Gelman, Carlin, Stern, and Rubin ("Bayesian Data Analysis", a.k.a. "The Red Book"). It involves more than one rather unpretty completing of a square. The result is as follows.
Posterior predictive distribution of x_{n+1}:
x_{n+1} | x₁, x₂, ..., x_n ~ No(μ_n, σ₀² + τ_n²),
where
μ_n = [θ₀/τ₀² + n x̄/σ₀²] / [1/τ₀² + n/σ₀²]  and  τ_n² = 1 / [1/τ₀² + n/σ₀²].
Here x̄ = (1/n) Σ_i x_i.
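The cutoff k < n log 2 / log 6 ≈ 0.39 n can be confirmed by brute force over all k, again with exact arithmetic. A sketch (names ours):

```python
import math
from fractions import Fraction

def map_is_theta1(n, k, prior1=Fraction(1, 2)):
    """True when theta1 has the strictly larger posterior given S_n = k
    (likelihoods 0.2 vs 0.6; only numerators need comparing)."""
    p1, p2 = Fraction(1, 5), Fraction(3, 5)
    return prior1 * p1**k * (1 - p1) ** (n - k) > (1 - prior1) * p2**k * (1 - p2) ** (n - k)

n = 100
cutoff = n * math.log(2) / math.log(6)                               # about 38.69
winners = [k for k in range(n + 1) if map_is_theta1(n, k)]
print(cutoff, winners[-1])                                           # largest winning k
winners_alt = [k for k in range(n + 1) if map_is_theta1(n, k, Fraction(1, 4))]
print(winners == winners_alt)                                        # both priors agree at n = 100
```

For n = 100 both priors give θ̂ = θ₁ exactly on k ∈ {0, ..., 38}, as claimed.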
The (marginal) predictive distribution of x_{n+1} is just the special case of the posterior predictive distribution with n = 0, so we have x_{n+1} ~ No(θ₀, σ₀² + τ₀²).
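The predictive formulas (posterior predictive for general n, marginal as the n = 0 special case) can be sketched numerically as follows; the function name and example values are ours:

```python
def predictive(xbar, n, theta0, tau0_sq, sigma0_sq):
    """Mean and variance of x_{n+1} | x_1, ..., x_n in the normal-normal model:
    mu_n is the precision-weighted average of theta0 and xbar, and the
    predictive variance is sigma0^2 + tau_n^2.  n = 0 gives the marginal."""
    prec = 1.0 / tau0_sq + n / sigma0_sq
    tau_n_sq = 1.0 / prec
    mu_n = tau_n_sq * (theta0 / tau0_sq + n * xbar / sigma0_sq)
    return mu_n, sigma0_sq + tau_n_sq

# Marginal (n = 0): mean theta0, variance sigma0^2 + tau0^2.
print(predictive(0.0, 0, theta0=5.0, tau0_sq=4.0, sigma0_sq=1.0))    # (5.0, 5.0)
# Large n: mean near xbar, variance near sigma0^2.
print(predictive(2.0, 10**6, theta0=5.0, tau0_sq=4.0, sigma0_sq=1.0))
```

The second call also previews part (b): for large n the predictive mean moves to x̄ and the variance shrinks toward σ₀².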
(b) As n goes to infinity, the (marginal) predictive distribution of x_{n+1} never changes. This is to be expected: since it takes no account of the data, it gains no information. The mean of x_{n+1} is always the prior mean of θ, namely θ₀, and the variance of x_{n+1} comes from two sources: the uncertainty in the observation (σ₀²) and the uncertainty in the prior estimate of θ (τ₀²).
However, as n goes to infinity the posterior predictive distribution of x_{n+1} given x₁, x₂, ..., x_n behaves differently. The posterior mean, which is a weighted average of the prior mean and the sample mean, puts a heavier and heavier weight on the sample mean. And the variance has less and less of its uncertainty coming from the uncertainty in θ, so that nearly all the uncertainty comes from the uncertainty in the observation. In the limit, x_{n+1} | x₁, x₂, ..., x_n ~ No(x̄, σ₀²).

6. (10 pt) Bickel & Doksum pg 84, problem 1.5.3.
The factorization theorem will be useful to determine a sufficient statistic in each part.
(a) p(x₁, x₂, ..., x_n | θ) = Π_i θ x_i^(θ−1) = θⁿ Π x_i^(θ−1) = θⁿ exp((θ−1) Σ log x_i).
By the factorization theorem, Σ log x_i is a sufficient statistic for θ. Note that taking the log was not necessary: another sufficient statistic would be Π x_i. The latter is more cumbersome computationally, so it is common in a situation like this to take logs.
(b) p(x₁, x₂, ..., x_n | θ) = Π_i θ a x_i^(a−1) exp(−θ x_i^a) = exp(n log θ + n log a + (a−1) Σ log x_i − θ Σ x_i^a).
A sufficient statistic for θ is Σ x_i^a.
(c) p(x₁, x₂, ..., x_n | θ) = Π_i θ a^θ / x_i^(θ+1) = exp(Σ (log θ + θ log a − (θ+1) log x_i)) = exp(n log θ + nθ log a − (θ+1) Σ log x_i).
A sufficient statistic for θ is Σ log x_i.
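Sufficiency in part (a) can be illustrated concretely: two different samples with the same Σ log x_i yield identical likelihood functions in θ. A small sketch (the example values are ours):

```python
import math

def loglik(xs, theta):
    """Log-likelihood for f(x|theta) = theta * x^(theta-1) on (0,1):
    n log theta + (theta - 1) * sum(log x_i)."""
    n = len(xs)
    return n * math.log(theta) + (theta - 1) * sum(math.log(x) for x in xs)

# Two different samples with the same sufficient statistic sum(log x_i):
a = [0.2, 0.9]
b = [0.6, 0.3]    # 0.6 * 0.3 = 0.18 = 0.2 * 0.9, so the sums of logs match
for theta in (0.5, 1.0, 3.0):
    print(theta, loglik(a, theta), loglik(b, theta))
```

Since the likelihoods coincide for every θ, any inference about θ is the same for the two samples, which is exactly what sufficiency of Σ log x_i says.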
7. (10 pt) Bickel & Doksum pg 85, problem 1.5.4.
(a) Suppose a bijection f exists between T₁ and T₂, with T₂ = f ∘ T₁. Then T₁(x) = T₁(y) ⟹ f(T₁(x)) = f(T₁(y)) ⟹ T₂(x) = T₂(y). And the reverse implication is also true because f, being a bijection, has an inverse.
(b) Let f(x) = log(x). Then f(Π x_i) = log(Π x_i) = Σ log x_i. f is a bijection, so these are equivalent statistics.
(c) These are not equivalent statistics. Here's a counterexample: if x₁ = 1 and x₂ = 4 then Σx_i = 5, and if y₁ = 2 and y₂ = 3 then we also have Σy_i = 5. But log 1 + log 4 ≠ log 2 + log 3. So the information contained in these two statistics is not identical.
(d) Σ(x_i − x̄)² = Σ(x_i² − 2 x_i x̄ + x̄²) = Σx_i² − 2 x̄ Σx_i + n x̄² = Σx_i² − (2/n)(Σx_i)² + (1/n)(Σx_i)² = Σx_i² − (1/n)(Σx_i)².
We can therefore uniquely express the pair [Σx_i, Σ(x_i − x̄)²] in terms of the pair [Σx_i, Σx_i²]. A little algebra shows that the reverse is also uniquely possible, with Σx_i² = Σ(x_i − x̄)² + (1/n)(Σx_i)². Thus a bijection between these two statistics exists, and they are therefore equivalent.
(e) These are not equivalent statistics. Here's an example: begin with x₁ = 3, x₂ = 0, x₃ = 0; then we have T₁ = (3, 27) and T₂ = (3, 6). Now take another set, 3, 1, −1; then again T₁ = (3, 27) but T₂ = (3, 0). More generally, to obtain a counterexample, begin with x₁ = 1, x₂ = 2, x₃ = 3. We have Σx_i = 6 and Σx_i³ = 36. That system has two equations but three unknowns, and therefore more than one distinct solution from which to construct a counterexample.

8. (10 pt) Bickel & Doksum pg 86, problem 1.5.14.
(a) By definition, the posterior distribution of θ given X is
π(θ | X) = π(θ) π(X | θ) / ∫_θ π(θ) π(X | θ) dθ.
If the posterior distribution depends upon X only through T(X), then the likelihood function π(X | θ) must also depend upon X only through T(X), for it is only in the likelihood function that X appears in the expression for the posterior distribution of θ given X.
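The numbers in the counterexample of 7(e) can be checked directly. In this sketch we take T₁ = (Σx_i, Σx_i³) and T₂ = (Σx_i, Σ(x_i − x̄)³), definitions inferred from the worked values; if the textbook defines the statistics differently, adjust accordingly:

```python
def T1(xs):
    # (sum of x_i, sum of x_i^3)
    return (sum(xs), sum(x**3 for x in xs))

def T2(xs):
    # (sum of x_i, sum of centered cubes) -- assumed from the worked values
    xbar = sum(xs) / len(xs)
    return (sum(xs), sum((x - xbar) ** 3 for x in xs))

a, b = [3, 0, 0], [3, 1, -1]
print(T1(a), T1(b))   # equal: (3, 27) both times
print(T2(a), T2(b))   # second coordinates differ: 6.0 vs 0.0
```

Two samples with the same T₁ but different T₂ means no function of T₁ can recover T₂, so the statistics are not equivalent.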
And if the likelihood function π(X | θ) depends upon X only through T(X), then all X having a common value of T(X) belong to the same information class, and their distribution given T(X) will not depend upon θ. Thus, in that case, T(X) is a sufficient statistic.
8 (b) If T (X) is a sufficient statistic for θ, then by the factorization theorem, the distribution of X given θ can be factored thus: π(x θ) g(t (X), θ)h(x). But then the posterior distribution of θ given X can be expressed in a similar form, π(θ)g(t (X),θ)h(X) by substitution: π(θ X). Thus, the posterior distribution Rθ π(θ)g(t (X),θ)h(X)dθ of θ given X depends upon X only through T (X). 9. (0 pt) Bickel & Doksum pg 87, problem.6.2 (b) (c) η(θ) θ B(θ) n log θ T (x) n i log x i h(x) I (0,) (x :n ) f(x θ) Π n i θxθ i I (0,) (x :n ) exp(n log θ + (θ ) n log x i )I (0,) (x :n ) f(x θ) Π n iθaxi a exp( θx a i )I (0, ) (x :n ), (a is a constant ) n exp(n log θ θ x a i )an Π n i xa i I (0, ) (x :n ) η(θ) θ B(θ) n log θ T (x) n i xa i h(x) a n Π n ix a i I (0, ) (x :n ) i i f(x theta) Π n θa θ i I (a, ) (x :n ) x θ+ i exp(n(log θ + θ log a) (θ + ) n log x i )I (a, ) (x :n ) i η(θ) (θ + ) B(θ) n log θ nθ log a T (x) n i log x i h(x) I (a, ) (x :n ) 8
10. (10 pt) Bickel & Doksum pg 87, problem 1.6.4.
(a) The family of uniform distributions is not an exponential family. If X has pdf π(x | θ) = (1/θ) I_[0,θ](x), then π(x | θ) takes the value 0 over a part of its domain that depends upon θ. But if a function f is in an exponential family, expressed as f(x | θ) = h(x) e^(T(x)η(θ) − B(θ)), then the only way the function can take the value 0 is if h(x) = 0, which cannot depend upon θ.
(b) Rewriting this in a different form shows that it is a triangular density: π(x | θ) = 2x/θ² over x ∈ (0, θ) and π(x | θ) = 0 elsewhere. This is not an exponential family, for the same reason as the uniform distribution in (a): the locations where π(x | θ) = 0 depend upon θ, but in an exponential family such a set of locations could only depend upon x.
(c) Once again, this function has probability mass equal to zero at lots of locations, but they depend upon θ. So it cannot be in any exponential family.
(d) WHOOPS! Sorry we didn't catch this earlier: this is kind of a trick question. In section 1.6.3 Bickel & Doksum define curved exponential families, where (as in this example) the parameter space Θ has lower dimension (l = 1) than the natural parameter η(θ) or natural sufficient statistic T(X) (both of dimension k = 2). The authors would say this is not an exponential family, since l < k. I feel the distinction is subtle and misleading, so we will treat either "yes" (with k = 2 and η, T as below) or "no" (with the explanation that l < k) as a correct answer. If you answered "no" and did not receive credit for this problem, please bring your homework to either Zhenglei or to me for us to correct our grading error. Meanwhile, the "yes" answer we'll accept is:
Yes, this is an exponential family.
π(x | θ) = (1/√(2πθ²)) exp(−(x−θ)²/(2θ²))
  = exp(−(1/2) log(2π) − log θ − (x−θ)²/(2θ²))
  = exp(−(1/2) log(2π) − log θ − (1/(2θ²))(x² − 2xθ + θ²))
  = exp(−(1/2) log(2π) − log θ − x²/(2θ²) + x/θ − 1/2)
  = exp(−(1/2) log(2π) − 1/2) exp(−x²/(2θ²) + x/θ − log θ)
  = exp(−(1/2) log(2π) − 1/2) exp([x, x²] · [1/θ, −1/(2θ²)]ᵀ − log θ).
Here we have η(θ) = [1/θ, −1/(2θ²)], T(x) = [x, x²], B(θ) = log θ, and h(x) = exp(−(1/2) log(2π) − 1/2).
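The factorization in 10(d) can be verified pointwise: the exponential-family form with η, T, B, h as above reproduces the No(θ, θ²) density. A sketch (θ > 0 assumed, function names ours):

```python
import math

def normal_pdf(x, theta):
    """N(theta, theta^2) density, written directly."""
    return math.exp(-((x - theta) ** 2) / (2 * theta**2)) / math.sqrt(2 * math.pi * theta**2)

def normal_pdf_expfam(x, theta):
    """Same density via h(x) exp(eta(theta) . T(x) - B(theta)) with
    eta = [1/theta, -1/(2 theta^2)], T = [x, x^2], B = log theta."""
    h = math.exp(-0.5 * math.log(2 * math.pi) - 0.5)
    eta = (1.0 / theta, -1.0 / (2 * theta**2))
    T = (x, x * x)
    B = math.log(theta)
    return h * math.exp(eta[0] * T[0] + eta[1] * T[1] - B)

for x, theta in [(0.3, 1.0), (-2.0, 0.7), (5.0, 2.5)]:
    print(normal_pdf(x, theta), normal_pdf_expfam(x, theta))
```

The two computations agree to machine precision, confirming the completed-square algebra.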
(e) π(x | θ) = 2(x+θ)/(1+2θ) = exp(log 2 + log(x+θ) − log(1+2θ)).
The log(x+θ) part of this expression cannot be factored into a T(x) part plus an η(θ) part. So this family is not an exponential family.
(f) π(x | θ) = (n choose x) θ^x (1−θ)^(n−x) = (n choose x) (1−θ)ⁿ (θ/(1−θ))^x = (n choose x) exp(x [log θ − log(1−θ)] + n log(1−θ)).
This is an exponential family, with η(θ) = log θ − log(1−θ), T(x) = x, B(θ) = −n log(1−θ), and h(x) = (n choose x).
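Likewise for 10(f): the factored form with η = log(θ/(1−θ)), B = −n log(1−θ), h = (n choose x) reproduces the binomial pmf. A quick sketch:

```python
import math

def binom_pmf(n, x, theta):
    """Binomial pmf written directly."""
    return math.comb(n, x) * theta**x * (1 - theta) ** (n - x)

def binom_pmf_expfam(n, x, theta):
    """Same pmf via h(x) exp(eta(theta) x - B(theta))."""
    eta = math.log(theta / (1 - theta))
    B = -n * math.log(1 - theta)
    return math.comb(n, x) * math.exp(eta * x - B)

print(binom_pmf(10, 3, 0.4), binom_pmf_expfam(10, 3, 0.4))
print(sum(binom_pmf_expfam(10, x, 0.4) for x in range(11)))   # total mass, close to 1
```

Note the natural-parameter form needs 0 < θ < 1, matching the domain of log(θ/(1−θ)).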
STAT215: Solutions for Homework 2
Due: Wednesday, Feb 4

1. (10 pt) Suppose we take one observation, X, from the discrete distribution

  x             −2        −1     0     1          2
  Pr(X=x|θ)     (1−θ)/4   θ/12   1/2   (3−θ)/12   θ/4,     0 ≤ θ ≤ 1.

Find an unbiased estimator
More informationThis exam contains 13 pages (including this cover page) and 10 questions. A Formulae sheet is provided with the exam.
Probability and Statistics FS 2017 Session Exam 22.08.2017 Time Limit: 180 Minutes Name: Student ID: This exam contains 13 pages (including this cover page) and 10 questions. A Formulae sheet is provided
More informationSTAT 514 Solutions to Assignment #6
STAT 514 Solutions to Assignment #6 Question 1: Suppose that X 1,..., X n are a simple random sample from a Weibull distribution with density function f θ x) = θcx c 1 exp{ θx c }I{x > 0} for some fixed
More informationUNIVERSITY OF TORONTO SCARBOROUGH Department of Computer and Mathematical Sciences FINAL EXAMINATION, APRIL 2013
UNIVERSITY OF TORONTO SCARBOROUGH Department of Computer and Mathematical Sciences FINAL EXAMINATION, APRIL 2013 STAB57H3 Introduction to Statistics Duration: 3 hours Last Name: First Name: Student number:
More informationModern Methods of Statistical Learning sf2935 Auxiliary material: Exponential Family of Distributions Timo Koski. Second Quarter 2016
Auxiliary material: Exponential Family of Distributions Timo Koski Second Quarter 2016 Exponential Families The family of distributions with densities (w.r.t. to a σ-finite measure µ) on X defined by R(θ)
More informationActuarial Science Exam 1/P
Actuarial Science Exam /P Ville A. Satopää December 5, 2009 Contents Review of Algebra and Calculus 2 2 Basic Probability Concepts 3 3 Conditional Probability and Independence 4 4 Combinatorial Principles,
More informationMIT Spring 2016
Exponential Families II MIT 18.655 Dr. Kempthorne Spring 2016 1 Outline Exponential Families II 1 Exponential Families II 2 : Expectation and Variance U (k 1) and V (l 1) are random vectors If A (m k),
More informationContinuous Random Variables
Continuous Random Variables Recall: For discrete random variables, only a finite or countably infinite number of possible values with positive probability. Often, there is interest in random variables
More informationn! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2
Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction
More informationSOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM
SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM Solutions to Question A1 a) The marginal cdfs of F X,Y (x, y) = [1 + exp( x) + exp( y) + (1 α) exp( x y)] 1 are F X (x) = F X,Y (x, ) = [1
More informationSUFFICIENT STATISTICS
SUFFICIENT STATISTICS. Introduction Let X (X,..., X n ) be a random sample from f θ, where θ Θ is unknown. We are interested using X to estimate θ. In the simple case where X i Bern(p), we found that the
More informationMODEL COMPARISON CHRISTOPHER A. SIMS PRINCETON UNIVERSITY
ECO 513 Fall 2008 MODEL COMPARISON CHRISTOPHER A. SIMS PRINCETON UNIVERSITY SIMS@PRINCETON.EDU 1. MODEL COMPARISON AS ESTIMATING A DISCRETE PARAMETER Data Y, models 1 and 2, parameter vectors θ 1, θ 2.
More informationMasters Comprehensive Examination Department of Statistics, University of Florida
Masters Comprehensive Examination Department of Statistics, University of Florida May 6, 003, 8:00 am - :00 noon Instructions: You have four hours to answer questions in this examination You must show
More informationDistributions of Functions of Random Variables. 5.1 Functions of One Random Variable
Distributions of Functions of Random Variables 5.1 Functions of One Random Variable 5.2 Transformations of Two Random Variables 5.3 Several Random Variables 5.4 The Moment-Generating Function Technique
More informationJoint Probability Distributions and Random Samples (Devore Chapter Five)
Joint Probability Distributions and Random Samples (Devore Chapter Five) 1016-345-01: Probability and Statistics for Engineers Spring 2013 Contents 1 Joint Probability Distributions 2 1.1 Two Discrete
More informationModel comparison. Christopher A. Sims Princeton University October 18, 2016
ECO 513 Fall 2008 Model comparison Christopher A. Sims Princeton University sims@princeton.edu October 18, 2016 c 2016 by Christopher A. Sims. This document may be reproduced for educational and research
More informationA Very Brief Summary of Statistical Inference, and Examples
A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2008 Prof. Gesine Reinert 1 Data x = x 1, x 2,..., x n, realisations of random variables X 1, X 2,..., X n with distribution (model)
More informationJoint p.d.f. and Independent Random Variables
1 Joint p.d.f. and Independent Random Variables Let X and Y be two discrete r.v. s and let R be the corresponding space of X and Y. The joint p.d.f. of X = x and Y = y, denoted by f(x, y) = P(X = x, Y
More informationECE531 Homework Assignment Number 6 Solution
ECE53 Homework Assignment Number 6 Solution Due by 8:5pm on Wednesday 3-Mar- Make sure your reasoning and work are clear to receive full credit for each problem.. 6 points. Suppose you have a scalar random
More information[Chapter 6. Functions of Random Variables]
[Chapter 6. Functions of Random Variables] 6.1 Introduction 6.2 Finding the probability distribution of a function of random variables 6.3 The method of distribution functions 6.5 The method of Moment-generating
More informationContinuous distributions
CHAPTER 7 Continuous distributions 7.. Introduction A r.v. X is said to have a continuous distribution if there exists a nonnegative function f such that P(a X b) = ˆ b a f(x)dx for every a and b. distribution.)
More informationA Very Brief Summary of Statistical Inference, and Examples
A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2009 Prof. Gesine Reinert Our standard situation is that we have data x = x 1, x 2,..., x n, which we view as realisations of random
More informationMidterm Examination. Mth 136 = Sta 114. Wednesday, 2000 March 8, 2:20 3:35 pm
Midterm Examination Mth 136 = Sta 114 Wednesday, 2000 March 8, 2:20 3:35 pm This is a closed-book examination so please do not refer to your notes, the text, or to any other books. You may use a two-sided
More informationSTA 4321/5325 Solution to Homework 5 March 3, 2017
STA 4/55 Solution to Homework 5 March, 7. Suppose X is a RV with E(X and V (X 4. Find E(X +. By the formula, V (X E(X E (X E(X V (X + E (X. Therefore, in the current setting, E(X V (X + E (X 4 + 4 8. Therefore,
More informationBMIR Lecture Series on Probability and Statistics Fall 2015 Discrete RVs
Lecture #7 BMIR Lecture Series on Probability and Statistics Fall 2015 Department of Biomedical Engineering and Environmental Sciences National Tsing Hua University 7.1 Function of Single Variable Theorem
More information