SOLUTION FOR HOMEWORK 6, STAT 6331


1. Exerc. 7.22. It is given that $X_1, \ldots, X_n$ is a sample from $N(\theta, \sigma^2)$, and the Bayesian approach is used with $\Theta \sim N(\mu, \tau^2)$. The parameters $\sigma^2$, $\mu$ and $\tau^2$ are given.

(a) Find the joint pdf of $\bar X$ and $\Theta$. We know that $\bar X \mid \Theta = \theta \sim N(\theta, \sigma^2/n)$, thus

$f_{\bar X, \Theta}(x, \theta) = (2\pi\sigma^2/n)^{-1/2} e^{-(x-\theta)^2/[2\sigma^2/n]} \, (2\pi\tau^2)^{-1/2} e^{-(\theta-\mu)^2/[2\tau^2]}$

$= [4\pi^2\sigma^2\tau^2/n]^{-1/2} \exp\big( -\frac{x^2 - 2x\theta + \theta^2}{2\sigma^2/n} - \frac{\theta^2 - 2\mu\theta + \mu^2}{2\tau^2} \big)$  (1)

$= [4\pi^2\sigma^2\tau^2/n]^{-1/2} \exp\big( -\frac{\tau^2 x^2 - 2\theta[\tau^2 x + \mu\sigma^2/n] + (\tau^2 + \sigma^2/n)\theta^2 + \mu^2\sigma^2/n}{2(\sigma^2/n)\tau^2} \big)$.  (2)

What can we say about this distribution? It is bivariate normal! Remember that $Y = (Y_1, Y_2)^T$ is bivariate normal with mean $(\nu_1, \nu_2)^T$ and covariance matrix

$\Sigma = \begin{pmatrix} d_1^2 & \rho d_1 d_2 \\ \rho d_1 d_2 & d_2^2 \end{pmatrix}, \quad d_1 > 0, \ d_2 > 0, \ |\rho| < 1,$

iff

$f_{Y_1, Y_2}(y_1, y_2) = (2\pi)^{-1} (\det(\Sigma))^{-1/2} \exp\{ -(1/2)(y - \nu)^T \Sigma^{-1} (y - \nu) \}$  (3)

$= \frac{(1-\rho^2)^{-1/2}}{2\pi d_1 d_2} \exp\big\{ -\frac{1}{2(1-\rho^2)} \big[ \frac{(y_1 - \nu_1)^2}{d_1^2} - \frac{2\rho (y_1 - \nu_1)(y_2 - \nu_2)}{d_1 d_2} + \frac{(y_2 - \nu_2)^2}{d_2^2} \big] \big\}$.  (4)

Let us now directly match (2) and (4). Because these two densities are the same for all $x$ and $\theta$, we can equate coefficients (first for $x^2$ and then for $\theta^2$):

$\sigma^2/n = (1-\rho^2) d_1^2, \qquad \frac{(\sigma^2/n)\tau^2}{\tau^2 + \sigma^2/n} = (1-\rho^2) d_2^2.$

Now note that $d_2^2 = \mathrm{Var}(\Theta) = \tau^2$, and using this we get the simplified system

$\sigma^2/n = (1-\rho^2) d_1^2, \qquad \frac{\sigma^2/n}{\tau^2 + \sigma^2/n} = 1 - \rho^2,$

which yields

$d_1^2 = n^{-1}\sigma^2 + \tau^2, \qquad \rho = \frac{\tau}{(n^{-1}\sigma^2 + \tau^2)^{1/2}}.$

Now I note that $\nu_2 = E(\Theta) = \mu$. Keeping this in mind, I compare (2) and (4) in terms of the coefficients for $x$ (note that there is no linear term in $x$ in (2)):

$0 = \frac{\nu_1}{(1-\rho^2) d_1^2} - \frac{\rho \nu_2}{(1-\rho^2) d_1 d_2}.$
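
As a quick numerical sanity check of these parameters (this is not part of the original solution), here is a minimal Monte Carlo sketch; it assumes NumPy is available, and the parameter values are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, tau, sigma, n = 1.0, 2.0, 3.0, 10              # arbitrary illustrative values
theta = rng.normal(mu, tau, size=200_000)          # Theta ~ N(mu, tau^2)
xbar = rng.normal(theta, sigma / np.sqrt(n))       # Xbar | Theta ~ N(Theta, sigma^2/n)

print(xbar.mean(), theta.mean())                   # both should be close to mu
print(xbar.var(), sigma**2 / n + tau**2)           # d_1^2 = sigma^2/n + tau^2
print(np.corrcoef(xbar, theta)[0, 1],
      tau / np.sqrt(sigma**2 / n + tau**2))        # rho
```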

This yields $\nu_1 = \rho \mu d_1/d_2 = \mu$. We conclude that (2) is the bivariate normal with

$\nu_1 = \mu, \quad \nu_2 = \mu, \quad d_1^2 = n^{-1}\sigma^2 + \tau^2, \quad d_2^2 = \tau^2, \quad \rho = \frac{\tau}{(n^{-1}\sigma^2 + \tau^2)^{1/2}}.$  (5)

(ii) Can we figure out (5) faster/simpler? Of course! We know that $\nu_2 = \mu$ and $d_2^2 = \tau^2$. Then, using $\bar X \mid \Theta \sim N(\Theta, \sigma^2/n)$, we can write

$\nu_1 = E(\bar X) = E\{E(\bar X \mid \Theta)\} = E(\Theta) = \mu,$

and

$d_1^2 = \mathrm{Var}(\bar X) = E(\bar X^2) - (E \bar X)^2 = E(\bar X^2) - \mu^2.$

Now, our next step is to calculate

$E(\bar X^2) = E\{E(\bar X^2 \mid \Theta)\} = E\{\mathrm{Var}(\bar X \mid \Theta) + (E\{\bar X \mid \Theta\})^2\} = E\{n^{-1}\sigma^2 + \Theta^2\} = n^{-1}\sigma^2 + [\mathrm{Var}(\Theta) + (E\Theta)^2] = n^{-1}\sigma^2 + \tau^2 + \mu^2.$  (6)

This allows us to conclude that $d_1^2 = E(\bar X^2) - \mu^2 = n^{-1}\sigma^2 + \tau^2$. Finally, because $E\{(\bar X - \mu)(\Theta - \mu)\} = E(\bar X \Theta) - \mu^2$, we get

$\rho = \frac{E(\bar X \Theta) - \mu^2}{d_1 d_2} = \frac{E\{E(\bar X \Theta \mid \Theta)\} - \mu^2}{d_1 d_2} = \frac{E(\Theta^2) - \mu^2}{d_1 d_2}.$

I use (6) to continue:

$\rho = \frac{\tau^2 + \mu^2 - \mu^2}{\tau (n^{-1}\sigma^2 + \tau^2)^{1/2}} = \frac{\tau}{(n^{-1}\sigma^2 + \tau^2)^{1/2}}.$

We got the same parameters!

(b) We have shown this in part (a), but let us do this directly to improve our calculus skills. Write

$f_{\bar X}(x) = \int f_{\bar X, \Theta}(x, \theta)\, d\theta$  [I use (2)]

$= [4\pi^2\sigma^2\tau^2/n]^{-1/2} \int \Big[ \exp\big\{ -\frac{\tau^2 x^2}{2(\sigma^2/n)\tau^2} \big\}$  (7)

$\times \exp\big\{ -\frac{(\tau^2 + \sigma^2/n)[\theta^2 - 2\theta(\tau^2 x + \mu\sigma^2/n)(\tau^2 + \sigma^2/n)^{-1} + (\tau^2 x + \mu\sigma^2/n)^2 (\tau^2 + \sigma^2/n)^{-2}]}{2(\sigma^2/n)\tau^2} \big\}$  (8)

$\times \exp\big\{ \frac{(\tau^2 x + \mu\sigma^2/n)^2 (\tau^2 + \sigma^2/n)^{-1} - \mu^2\sigma^2/n}{2(\sigma^2/n)\tau^2} \big\} \Big]\, d\theta.$  (9)
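
The integration can also be checked numerically. Here is a minimal sketch (not part of the original solution), assuming SciPy is available and using arbitrary parameter values; it compares the integral of the joint density with the claimed $N(\mu, \sigma^2/n + \tau^2)$ marginal:

```python
import numpy as np
from scipy import integrate, stats

mu, tau, sigma, n = 1.0, 2.0, 3.0, 10      # arbitrary illustrative values
s2n = sigma**2 / n

def joint(theta, x):
    # f(x | theta) * pi(theta): the integrand of the marginal of Xbar
    return stats.norm.pdf(x, theta, np.sqrt(s2n)) * stats.norm.pdf(theta, mu, tau)

for x in (-1.0, 0.5, 4.0):
    marginal, _ = integrate.quad(joint, -np.inf, np.inf, args=(x,))
    # should match the N(mu, sigma^2/n + tau^2) density derived above
    print(marginal, stats.norm.pdf(x, mu, np.sqrt(s2n + tau**2)))
```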

Now see how I am finishing. The integral with respect to $\theta$ of the factor (8) is a constant (it is a normal kernel in $\theta$, and its integral does not depend on $x$). Now, I expect to have

$f_{\bar X}(x) = (2\pi d_1^2)^{-1/2} \exp\{-(x - \nu_1)^2/(2 d_1^2)\} = (2\pi d_1^2)^{-1/2} \exp\big\{ -\frac{x^2}{2 d_1^2} + \frac{x \nu_1}{d_1^2} - \frac{\nu_1^2}{2 d_1^2} \big\}.$

Thus, let us look at the terms $e^{c x^2}$ and $e^{c x}$. From (7) and (9) we get that

$\exp\big\{-\frac{x^2}{2 d_1^2}\big\} = \exp\big\{-\frac{\tau^2 x^2}{2(\sigma^2/n)\tau^2}\big\} \exp\big\{\frac{\tau^4 x^2}{2(\sigma^2/n)\tau^2 (\tau^2 + \sigma^2/n)}\big\}.$

This yields that $d_1^2 = \tau^2 + \sigma^2/n$ (what we expected to see). Further,

$\exp\big\{\frac{x \nu_1}{d_1^2}\big\} = \exp\big\{\frac{2\tau^2 x \, \mu\sigma^2/n}{2(\sigma^2/n)\tau^2 (\tau^2 + \sigma^2/n)}\big\} = \exp\big\{\frac{x \mu}{\tau^2 + \sigma^2/n}\big\}.$

This implies the desired $\nu_1 = \mu$.

(c) The posterior is

$\pi(\theta \mid x, \sigma^2, \mu, \tau^2) = \frac{f_{\Theta, \bar X}(\theta, x)}{f_{\bar X}(x)}.$

We did this in class, so I just remind you of the idea. You wish to see

$\pi(\theta \mid x, \sigma^2, \mu, \tau^2) = (2\pi d^2)^{-1/2} \exp\big\{ -\frac{(\theta - \nu)^2}{2 d^2} \big\}.$  (10)

The only terms in $\theta$ that we see in (2) are

$\pi(\theta \mid x, \sigma^2, \mu, \tau^2) \propto \exp\big\{ -\frac{(\tau^2 + \sigma^2/n)\theta^2 - 2\theta[\tau^2 x + \mu\sigma^2/n]}{2(\sigma^2/n)\tau^2} \big\}.$  (11)

Thus we conclude, via a comparison of (10) and (11), that

$\exp\big\{-\frac{\theta^2}{2 d^2}\big\} = \exp\big\{-\frac{(\tau^2 + \sigma^2/n)\theta^2}{2(\sigma^2/n)\tau^2}\big\}, \qquad \exp\big\{\frac{\theta \nu}{d^2}\big\} = \exp\big\{\frac{\theta[\tau^2 x + \mu\sigma^2/n]}{(\sigma^2/n)\tau^2}\big\}.$

Solving the system yields

$d^2 = \frac{(\sigma^2/n)\tau^2}{\tau^2 + \sigma^2/n}, \qquad \nu = x \frac{\tau^2}{\tau^2 + \sigma^2/n} + \mu \frac{\sigma^2/n}{\tau^2 + \sigma^2/n}.$

Finally, note that $\nu = E(\Theta \mid \bar X) = \hat\theta_{Bayes}$.

2. Exerc. 7.23. First of all, let us discuss the following. If

$S_n^2 := (n-1)^{-1} \sum_{l=1}^n (X_l - \bar X)^2,$
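
A small grid-based check of the posterior parameters $d^2$ and $\nu$ (again only an illustration, not part of the graded solution; the data-generating values are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, tau, sigma, n = 1.0, 2.0, 3.0, 10       # arbitrary illustrative values
x = rng.normal(2.5, sigma, size=n)
xbar, s2n = x.mean(), sigma**2 / n

# closed-form posterior parameters from part (c)
w = tau**2 / (tau**2 + s2n)
nu = w * xbar + (1 - w) * mu
d2 = s2n * tau**2 / (tau**2 + s2n)

# grid check: posterior density is proportional to f(xbar | theta) * pi(theta)
theta = np.linspace(mu - 12, mu + 12, 100_001)
dtheta = theta[1] - theta[0]
post = stats.norm.pdf(xbar, theta, np.sqrt(s2n)) * stats.norm.pdf(theta, mu, tau)
post /= post.sum() * dtheta                          # normalize numerically
print((theta * post).sum() * dtheta, nu)             # posterior mean vs nu
print(((theta - nu)**2 * post).sum() * dtheta, d2)   # posterior variance vs d^2
```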

then why is $(n-1)S_n^2/\sigma^2$ Chi-squared with $(n-1)$ degrees of freedom? Recall that if $\xi_1, \ldots, \xi_m$ are iid standard normal then $\chi^2_m \stackrel{D}{=} \sum_{j=1}^m \xi_j^2$ is the (central) chi-squared RV with $m$ degrees of freedom. A direct calculation shows that

$n S_{n+1}^2 = (n-1) S_n^2 + [n/(n+1)](X_{n+1} - \bar X_n)^2,$  (12)

where $\bar X_n = n^{-1} \sum_{l=1}^n X_l$. Now we use induction (without loss of generality let the $X_l$ be standard normal). For $n = 2$ we have (I see this directly since $2^{-1/2}(X_2 - X_1) \sim N(0, 1)$)

$S_2^2 = (1/2)(X_2 - X_1)^2 \stackrel{D}{=} \chi^2_1.$

Next, if $(m-1)S_m^2 \stackrel{D}{=} \chi^2_{m-1}$, we should prove that $m S_{m+1}^2 \stackrel{D}{=} \chi^2_m$. To verify this we use (12) and write

$m S_{m+1}^2 = (m-1) S_m^2 + (m/(m+1))(X_{m+1} - \bar X_m)^2.$

On the right side, the first term is $\chi^2_{m-1}$ by the induction assumption, and the second is independent of $S_m^2$ because $X_{m+1}$ is obviously independent and $\bar X_m$ is independent by Basu's Theorem. Plus we can check directly that $(m/(m+1))(X_{m+1} - \bar X_m)^2 \stackrel{D}{=} \chi^2_1$. We conclude that $m S_{m+1}^2$ has the chi-squared distribution with $m$ degrees of freedom.

Remark: If you note that $\sum_{l=1}^n X_l^2 = n \bar X^2 + \sum_{l=1}^n (X_l - \bar X)^2$, where $n^{1/2} \bar X \sim N(0, \sigma^2)$ and it is independent of $\sum_{l=1}^n (X_l - \bar X)^2$, then this is another way to look at (establish) the property.

Now let us look at the problem in the text. Let $t = s^2$. Then I use the chi-squared density:

$f_{S_n^2 \mid \sigma^2}(t \mid \sigma^2) = \frac{1}{\Gamma((n-1)/2)\, 2^{(n-1)/2}} [(n-1)t/\sigma^2]^{(n-1)/2 - 1} e^{-(n-1)t/(2\sigma^2)} [(n-1)/\sigma^2].$

Here I used the fact that if a RV $Z$ has the pdf $f_Z(z)$ then (do you remember how to verify it?) $f_{bZ}(y) = b^{-1} f_Z(y/b)$. Now I use the given prior pdf $\pi(\sigma^2)$ and get (as usual in Bayesian analysis, we are interested only in factors depending on $\sigma^2$)

$\pi(\sigma^2 \mid t) \propto [(1/\sigma^2)^{(n-1)/2} e^{-(n-1)t/(2\sigma^2)}] [(\sigma^2)^{-(\alpha+1)} e^{-1/(\beta\sigma^2)}] = (1/\sigma^2)^{(n-1)/2 + \alpha + 1} \exp\{-\sigma^{-2}[(n-1)t/2 + (1/\beta)]\}.$

As you see, this is again the inverted gamma pdf, that is, $\pi(\sigma^2 \mid t)$ is $IG(a, b)$ with

$a = (n-1)/2 + \alpha, \qquad b = [(n-1)t/2 + (1/\beta)]^{-1}.$

Because for the IG distribution the mean is $1/[(a-1)b]$, we get

$\hat\sigma^2_{Bayes} = E(\sigma^2 \mid t) = \frac{(n-1)t/2 + \beta^{-1}}{(n-1)/2 + \alpha - 1} = \frac{(n-1)S_n^2/2 + \beta^{-1}}{(n-1)/2 + \alpha - 1}.$
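
A Monte Carlo sketch (not part of the original solution, assuming NumPy; all parameter values are arbitrary) that checks the first two moments of $(n-1)S_n^2/\sigma^2$ against the $\chi^2_{n-1}$ values $n-1$ and $2(n-1)$, and evaluates the Bayes estimate for one simulated $t = s^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 8, 1.5                           # arbitrary illustrative values
x = rng.normal(0.0, sigma, size=(100_000, n))
s2 = x.var(axis=1, ddof=1)                  # S_n^2 for each replication
q = (n - 1) * s2 / sigma**2
print(q.mean(), n - 1)                      # chi^2_{n-1} mean
print(q.var(), 2 * (n - 1))                 # chi^2_{n-1} variance

# Bayes estimate of sigma^2 under the IG(alpha, beta) prior, for one observed t = s^2
alpha, beta, t = 3.0, 0.5, s2[0]            # hypothetical prior parameters
print(((n - 1) * t / 2 + 1 / beta) / ((n - 1) / 2 + alpha - 1))
```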

3. Exerc. 7.24. Let $X_1, \ldots, X_n$ be a sample from Poisson($\lambda$) and $\lambda \sim$ Gamma($\alpha, \beta$).

(a) What is the posterior distribution of $\Lambda$? Write

$\pi(\lambda \mid x) = \frac{\pi(\lambda) f_{X \mid \lambda}(x \mid \lambda)}{f_X(x)} \propto \lambda^{\alpha - 1} e^{-\lambda/\beta}\, e^{-n\lambda} \lambda^{\sum_{l=1}^n x_l} \propto \lambda^{\alpha + \sum_{l=1}^n x_l - 1} \exp\big\{ -\frac{\lambda (n\beta + 1)}{\beta} \big\},$

which is Gamma($\alpha + \sum_{l=1}^n x_l,\ \beta/(n\beta + 1)$).

(b) Write (using the previous result and the formula for the mean of a Gamma RV)

$\hat\lambda_{Bayes} = E(\Lambda \mid X) = \big(\alpha + \sum_{l=1}^n X_l\big) \frac{\beta}{n\beta + 1} = \frac{n\beta}{n\beta + 1} \bar X + \frac{1}{n\beta + 1} (\alpha\beta).$

Note that we again see the familiar structure of the Bayes estimate as a weighted average of the prior mean $\alpha\beta$ and the MLE $\bar X$. For the variance we similarly get

$\mathrm{Var}(\Lambda \mid X) = \big(\alpha + \sum_{l=1}^n X_l\big) \frac{\beta^2}{(n\beta + 1)^2}.$

4. Exerc. 7.25. A sequence of RVs $X_1, \ldots, X_n$ is observed. It is known that $X_i \mid \theta_i \sim N(\theta_i, \sigma^2)$ and $\theta_i \sim N(\mu, \tau^2)$. We begin with part (b) because we will use it in part (a).

(b) Assertion: If the $X_i \mid \theta_i$ are independent with the pdf $f(x \mid \theta_i)$, and the $\Theta_i$ are iid according to the pdf $\pi(\theta \mid \tau)$, then $X_1, \ldots, X_n$ are iid. Let us prove this assertion. The joint pdf of $(X, \Theta)$ is

$f(x, \theta \mid \tau) = \prod_{l=1}^n f(x_l, \theta_l \mid \tau) = \prod_{l=1}^n \pi(\theta_l \mid \tau) f(x_l \mid \theta_l).$

Then

$f(x \mid \tau) = \int \cdots \int \prod_{l=1}^n \pi(\theta_l \mid \tau) f(x_l \mid \theta_l)\, d\theta_1 \cdots d\theta_n = \int \cdots \int \Big\{ \int \pi(\theta_1 \mid \tau) f(x_1 \mid \theta_1)\, d\theta_1 \Big\} \prod_{l=2}^n \pi(\theta_l \mid \tau) f(x_l \mid \theta_l)\, d\theta_2 \cdots d\theta_n.$

Note that the integral in the curly brackets is $f(x_1 \mid \tau)$; then, repeatedly taking integrals (note that in the first line there are $n$ integrals) and taking into account that all obtained pdfs are the same, we obtain that

$f(x \mid \tau) = \prod_{l=1}^n f(x_l \mid \tau).$

This shows that marginally all $X$'s are iid.
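
The conjugate update and the weighted-average identity in part (b) are easy to verify numerically. A minimal sketch (not part of the original solution), assuming NumPy, with arbitrary $\alpha$, $\beta$, $n$:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, n, lam_true = 2.0, 1.5, 40, 3.0   # arbitrary illustrative values
x = rng.poisson(lam_true, size=n)

a_post = alpha + x.sum()                 # posterior Gamma shape
b_post = beta / (n * beta + 1)           # posterior Gamma scale
lam_bayes = a_post * b_post              # posterior mean = Bayes estimate

w = n * beta / (n * beta + 1)            # weight on the MLE xbar
print(lam_bayes, w * x.mean() + (1 - w) * alpha * beta)   # identical decomposition
print(a_post * b_post**2)                # posterior variance
```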

(a) Using the result of part (b) we can consider

$f_{X_1 \mid \mu, \sigma^2, \tau^2}(x) = \int f(x \mid \theta, \sigma^2)\, \pi(\theta \mid \mu, \tau^2)\, d\theta = \int [2\pi\sigma\tau]^{-1} e^{-(x-\theta)^2/(2\sigma^2)} e^{-(\theta-\mu)^2/(2\tau^2)}\, d\theta.$

Now we do some calculations for the product of the two exponential functions in the last integral. Write

$e^{-(x-\theta)^2/(2\sigma^2)} e^{-(\theta-\mu)^2/(2\tau^2)} = \exp\Big\{ -\frac{\big(\theta - \frac{x\tau^2 + \mu\sigma^2}{\sigma^2 + \tau^2}\big)^2}{2 \frac{\sigma^2\tau^2}{\sigma^2 + \tau^2}} - \frac{(x - \mu)^2}{2(\sigma^2 + \tau^2)} \Big\}.$

Only the first term in the exponent involves $\theta$. Let us denote it $T(\theta)$. Then it is the kernel of a normal pdf, so

$\int e^{T(\theta)}\, d\theta = (2\pi)^{1/2} \frac{\sigma\tau}{(\sigma^2 + \tau^2)^{1/2}}.$

Combining the results, we conclude that

$f_{X_1 \mid \mu, \sigma^2, \tau^2}(x) = (2\pi\sigma\tau)^{-1} (2\pi)^{1/2} \frac{\sigma\tau}{(\sigma^2 + \tau^2)^{1/2}} \exp\big\{ -\frac{(x - \mu)^2}{2(\sigma^2 + \tau^2)} \big\}.$

Simplify it a bit and you can clearly see that this is the pdf of $N(\mu, \sigma^2 + \tau^2)$. (As we knew before, sure.)

5. Exerc. 7.27. (a) First, let us write down the log-likelihood,

$l(\beta, \tau \mid (x, y)) = \sum_{l=1}^n [-\beta\tau_l + y_l \ln(\beta) + y_l \ln(\tau_l) - \ln(y_l!) - \tau_l + x_l \ln(\tau_l) - \ln(x_l!)].$

Take partial derivatives with respect to the parameters and then equate them to zero; this allows us to find the extremes. The equations are

$-\sum_{l=1}^n \tau_l + \beta^{-1} \sum_{l=1}^n y_l = 0,$

and

$-\beta - 1 + \tau_l^{-1} y_l + \tau_l^{-1} x_l = 0, \quad l = 1, 2, \ldots, n.$

We got a system of $n + 1$ equations with $n + 1$ parameters which we can solve. Write

$\beta = \frac{\sum_{l=1}^n y_l}{\sum_{l=1}^n \tau_l}, \qquad \tau_l = \frac{x_l + y_l}{\beta + 1}, \quad l = 1, 2, \ldots, n.$

Taking the sum of the $\tau_l$ yields $\sum_{l=1}^n \tau_l = (\beta + 1)^{-1} \sum_{l=1}^n (x_l + y_l)$, and we can simplify the relation for $\beta$ in the following way,

$\beta = (\beta + 1) \frac{\sum_{l=1}^n y_l}{\sum_{l=1}^n (x_l + y_l)},$
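
A simulation sketch of the MLE formulas from part (a) (not part of the original solution, assuming NumPy; the true $\beta$ and the $\tau_l$ are arbitrary). Note that each $\hat\tau_l$ is based on just two counts, so only $\hat\beta$ should be expected to be close to the truth:

```python
import numpy as np

rng = np.random.default_rng(4)
beta_true = 2.0                               # arbitrary illustrative value
tau_true = rng.uniform(1.0, 5.0, size=500)
y = rng.poisson(beta_true * tau_true)         # y_l ~ Poisson(beta * tau_l)
x = rng.poisson(tau_true)                     # x_l ~ Poisson(tau_l)

beta_hat = y.sum() / x.sum()                  # MLE of beta from part (a)
tau_hat = (x + y) / (beta_hat + 1)            # MLE of each tau_l
print(beta_hat)                               # close to beta_true
print(tau_hat[:5])
print(tau_true[:5])
```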

which in its turn yields

$\hat\beta = \frac{\sum_{l=1}^n y_l}{\sum_{l=1}^n x_l}.$

Now you should check that the extreme point is the maximum. Here either a graph or the following analysis can be used. Note that for any vector $\tau$ the extremum in $\beta$ is a maximum (the second derivative is $-\beta^{-2} \sum_{l=1}^n y_l$; it is negative, with the exception of the event $\sum_{l=1}^n y_l = 0$, which has exponentially vanishing probability and is a special case to consider! What do you think the MLE should be in this case?). The same is true for any $\tau_l$. We conclude that $(\hat\beta, \hat\tau_l, l = 1, 2, \ldots, n)$ are the MLE.

(b) Let $\hat\beta$ and the $\hat\tau_l$ be the limit points. Then they satisfy

$\hat\beta = \frac{\sum_{l=1}^n y_l}{\hat\tau_1 + \sum_{l=2}^n x_l}, \qquad \hat\tau_1 = \frac{\hat\tau_1 + y_1}{\hat\beta + 1}, \qquad \hat\tau_i = \frac{x_i + y_i}{\hat\beta + 1}, \quad i = 2, \ldots, n.$

Solve this system of equations and you get (7.2.16). Well, what is the simplest way to do that? You may begin with the second equation and get $\hat\tau_1 = y_1/\hat\beta$; substitute it into the first equation and get $\hat\beta = \sum_{l=2}^n y_l / \sum_{l=2}^n x_l$. Then you sum over $i$ in the third equation, substitute $\hat\beta$, and get $\sum_{i=2}^n \hat\tau_i = \sum_{i=2}^n x_i$; then plug into the first equation; this yields the first equation in (7.2.16). The two others have already been obtained.

(c) We already did this in part (b).

6. Exerc. Note that if $U \sim f(u)$, $W \sim g(w)$ and $Z \sim$ Bernoulli($p$), then

$X \stackrel{D}{=} Z U + (1 - Z) W.$

This is how you generate a RV with the mixture distribution! This remark can help you to understand the considered setting of missing data, where the observations of $Z$ are missing.

(a) Write, for $z_l \in \{0, 1\}$,

$f_{X,Z}(x, z \mid p) = f_Z(z \mid p) f_{X \mid Z}(x \mid z, p) = \prod_{l=1}^n [p f(x_l)]^{z_l} [(1-p) g(x_l)]^{1 - z_l}.$

To see why this expression is correct, note that $f_{X_1, Z_1}(x, 1 \mid p) = p f(x)$ and $f_{X_1, Z_1}(x, 0 \mid p) = (1-p) g(x)$. (Note that there is a misprint in the text on page 36, first line.)

(b) Consider

$f_{Z \mid X}(z \mid x, p) = \frac{f_{Z,X}(z, x \mid p)}{f_X(x \mid p)} = \frac{[p f(x)]^z [(1-p) g(x)]^{1-z}}{\sum_{z=0}^1 [p f(x)]^z [(1-p) g(x)]^{1-z}} = \frac{[p f(x)]^z [(1-p) g(x)]^{1-z}}{(1-p) g(x) + p f(x)},$

which is the pmf of Bernoulli($p f(x)/[p f(x) + (1-p) g(x)]$).

(c) Using the results of part (b) we can write

$E\{\ln(L(p \mid X, Z)) \mid p = \hat p, X = x\} = E\big\{ \sum_{l=1}^n [Z_l \ln(p f(x_l)) + (1 - Z_l) \ln((1-p) g(x_l))] \mid p = \hat p, X = x \big\}$
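
The posterior probability in part (b) is exactly the weight used in one EM iteration for $p$. A self-contained sketch (not part of the original solution), assuming NumPy and SciPy; the concrete component densities $f = N(0, 1)$ and $g = N(3, 1)$ are my own hypothetical choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
p_true, n = 0.3, 5_000                         # arbitrary illustrative values
z = rng.random(n) < p_true
x = np.where(z, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))  # X = ZU + (1-Z)W

f = stats.norm(0.0, 1.0).pdf                   # known component density f (assumed)
g = stats.norm(3.0, 1.0).pdf                   # known component density g (assumed)

p = 0.5                                        # starting value
for _ in range(200):
    w = p * f(x) / (p * f(x) + (1 - p) * g(x)) # E-step: P(Z_l = 1 | x_l, p)
    p = w.mean()                               # M-step: the update derived in (c)
print(p)                                       # close to p_true
```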

$= \sum_{l=1}^n \Big[ \frac{\hat p f(x_l)}{\hat p f(x_l) + (1 - \hat p) g(x_l)} \ln(p f(x_l)) + \frac{(1 - \hat p) g(x_l)}{\hat p f(x_l) + (1 - \hat p) g(x_l)} \ln((1-p) g(x_l)) \Big].$

The last expression, as a function of $p$, has a maximum (check this!), and this $p$ is the solution of the equation

$p^{-1} \sum_{l=1}^n \frac{\hat p f(x_l)}{\hat p f(x_l) + (1 - \hat p) g(x_l)} - (1-p)^{-1} \sum_{l=1}^n \frac{(1 - \hat p) g(x_l)}{\hat p f(x_l) + (1 - \hat p) g(x_l)} = 0.$

Solve it and get

$\hat p^{(r+1)} = n^{-1} \sum_{l=1}^n \frac{\hat p^{(r)} f(x_l)}{\hat p^{(r)} f(x_l) + (1 - \hat p^{(r)}) g(x_l)}.$

7. Exerc. 7.3 Part (b) is well explained: just apply Jensen's inequality. For part (a), the first assertion is plain: just plug $\hat\theta^{(r)}$ into (7.2.19). Then, using (7.2.19), we can write

$\log L(\theta \mid y) = E\{\log(L(\theta \mid y, X)) \mid \theta^{(r)}, y\} - E\{\log k(X \mid \theta, y) \mid \theta^{(r)}, y\}.$

Now, we choose $\theta^{(r+1)}$ which maximizes the expected complete-data log-likelihood, that is,

$\theta^{(r+1)} := \mathrm{argmax}_{\theta \in \Omega}\, E\{\log(L(\theta \mid Y, X)) \mid \theta^{(r)}, Y\}.$

At the same time, thanks to part (b),

$E\{\log k(X \mid \theta^{(r)}, y) \mid \theta^{(r)}, y\} \ge E\{\log k(X \mid \theta^{(r+1)}, y) \mid \theta^{(r)}, y\}.$

This yields $\log(L(\theta^{(r+1)} \mid y)) \ge \log(L(\theta^{(r)} \mid y))$.

Remark: Please pay attention to the Jensen inequality; it is very useful in solving many problems.
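
The monotonicity just proved can be observed numerically on the mixture model of Exercise 6. A sketch (not part of the original solution), assuming NumPy and SciPy; the component densities, sample sizes and starting point are arbitrary choices of mine:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])
f, g = stats.norm(0.0, 1.0).pdf, stats.norm(3.0, 1.0).pdf

def loglik(p):
    # incomplete-data log-likelihood log L(p | x)
    return np.log(p * f(x) + (1 - p) * g(x)).sum()

p, lls = 0.9, []
for _ in range(30):
    lls.append(loglik(p))
    w = p * f(x) / (p * f(x) + (1 - p) * g(x))  # one EM iteration, as in Exercise 6
    p = w.mean()
print(np.all(np.diff(lls) >= -1e-9))            # True: the likelihood never decreases
```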
