Lecture 17: Likelihood ratio and asymptotic tests

Likelihood ratio

When both $H_0$ and $H_1$ are simple (i.e., $\Theta_0 = \{\theta_0\}$ and $\Theta_1 = \{\theta_1\}$), Theorem 6.1 applies and a UMP test rejects $H_0$ when
$$f_{\theta_1}(X)\big/f_{\theta_0}(X) > c_0$$
for some $c_0 > 0$. The following definition is a natural extension of this idea.

Definition 6.2
Let $\ell(\theta) = f_\theta(X)$ be the likelihood function. For testing $H_0: \theta \in \Theta_0$ versus $H_1: \theta \in \Theta_1$, a likelihood ratio (LR) test is any test that rejects $H_0$ if and only if $\lambda(X) < c$, where $c \in [0,1]$ and $\lambda(X)$ is the likelihood ratio defined by
$$\lambda(X) = \sup_{\theta \in \Theta_0} \ell(\theta) \Big/ \sup_{\theta \in \Theta} \ell(\theta).$$

Discussions

- If $\lambda(X)$ is well defined, then $\lambda(X) \le 1$. The rationale behind LR tests is that when $H_0$ is true, $\lambda(X)$ tends to be close to 1, whereas when $H_1$ is true, $\lambda(X)$ tends to be away from 1.
- If there is a sufficient statistic, then $\lambda(X)$ depends only on the sufficient statistic.
- LR tests are as widely applicable as MLEs in §4.4 and, in fact, they are closely related to MLEs: if $\hat\theta$ is an MLE of $\theta$ and $\tilde\theta_0$ is an MLE of $\theta$ subject to $\theta \in \Theta_0$ (i.e., $\Theta_0$ is treated as the parameter space), then
  $$\lambda(X) = \ell(\tilde\theta_0)\big/\ell(\hat\theta).$$
- For a given $\alpha \in (0,1)$, if there exists a $c_\alpha \in [0,1]$ such that
  $$\sup_{\theta \in \Theta_0} P_\theta(\lambda(X) < c_\alpha) = \alpha,$$
  then an LR test of size $\alpha$ can be obtained. Even when the c.d.f. of $\lambda(X)$ is continuous, or when randomized LR tests are introduced, it is still possible that such a $c_\alpha$ does not exist.
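Since $\lambda(X) = \ell(\tilde\theta_0)/\ell(\hat\theta)$, the likelihood ratio can be computed numerically whenever the restricted and unrestricted MLEs can be found. Below is a minimal sketch (not from the lecture; it assumes numpy and scipy are available, and the data and constants are illustrative) for i.i.d. $N(\mu, \sigma^2)$ observations with $H_0: \mu = 0$ and $\sigma^2$ unknown.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=50)  # hypothetical data

def negloglik(params):
    mu, log_sigma = params
    return -norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)).sum()

# Unrestricted MLE over Theta = R x (0, infinity)
full = minimize(negloglik, x0=[0.0, 0.0])
# Restricted MLE over Theta_0 = {(mu, sigma^2): mu = 0}
null = minimize(lambda p: negloglik([0.0, p[0]]), x0=[0.0])

# lambda(X) = l(restricted MLE) / l(unrestricted MLE), always <= 1
lam = np.exp(full.fun - null.fun)
print(f"lambda(X) = {lam:.4f}")
```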

Optimality

When a UMP or UMPU test exists, an LR test is often the same as this optimal test.

Proposition 6.5
Suppose that $X$ has a p.d.f. in a one-parameter exponential family,
$$f_\theta(x) = \exp\{\eta(\theta)Y(x) - \xi(\theta)\}h(x)$$
w.r.t. a $\sigma$-finite measure $\nu$, where $\eta$ is a strictly increasing and differentiable function of $\theta$.
(i) For testing $H_0: \theta \le \theta_0$ versus $H_1: \theta > \theta_0$, there is an LR test whose rejection region is the same as that of the UMP test given in Theorem 6.2.
(ii) For testing $H_0: \theta \le \theta_1$ or $\theta \ge \theta_2$ versus $H_1: \theta_1 < \theta < \theta_2$, there is an LR test whose rejection region is the same as that of the UMP test given in Theorem 6.3.
(iii) For testing the other two-sided hypotheses, there is an LR test whose rejection region is equivalent to $Y(X) < c_1$ or $Y(X) > c_2$ for some constants $c_1$ and $c_2$.
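As a quick numerical illustration of Proposition 6.5(i) (a sketch under stated assumptions, not part of the lecture), take i.i.d. Poisson($\theta$) observations, for which $\eta(\theta) = \log\theta$ is strictly increasing and $Y(X) = \sum_i X_i$; the sketch checks that $\lambda(X)$ is a nonincreasing function of $Y$, so $\{\lambda(X) < c\}$ is of the form $\{Y > d\}$.

```python
import numpy as np

# Poisson(theta) sample of size n, Y = sum of the observations,
# H0: theta <= theta0.  lambda(Y) = 1 if Y/n <= theta0 and
# l(theta0)/l(theta_hat) otherwise (the x_i! factors cancel).
n, theta0 = 20, 1.0

def lam(y):
    theta_hat = y / n
    if theta_hat <= theta0:
        return 1.0
    return (theta0 / theta_hat) ** y * np.exp(-n * (theta0 - theta_hat))

ys = np.arange(0, 60)
vals = np.array([lam(y) for y in ys])
# lambda is nonincreasing in Y, so {lambda < c} = {Y > d}
assert np.all(np.diff(vals) <= 1e-12)
print(vals[[20, 25, 30, 40]])
```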

Proof

We prove (i) only. Let $\hat\theta$ be the MLE of $\theta$. Note that $\ell(\theta)$ is increasing when $\theta \le \hat\theta$ and decreasing when $\theta > \hat\theta$. Thus,
$$\lambda(X) = \begin{cases} 1 & \hat\theta \le \theta_0 \\ \ell(\theta_0)/\ell(\hat\theta) & \hat\theta > \theta_0. \end{cases}$$
Then $\lambda(X) < c$ is the same as $\hat\theta > \theta_0$ and $\ell(\theta_0)/\ell(\hat\theta) < c$.
From the property of exponential families, $\hat\theta$ is a solution of the likelihood equation
$$\frac{\partial \log\ell(\theta)}{\partial\theta} = \eta'(\theta)Y(X) - \xi'(\theta) = 0,$$
and $\psi(\theta) = \xi'(\theta)/\eta'(\theta)$ has a positive derivative $\psi'(\theta)$. Since $\eta'(\hat\theta)Y - \xi'(\hat\theta) = 0$, $\hat\theta$ is an increasing function of $Y$ with $d\hat\theta/dY > 0$. Consequently, for any $\theta_0 \in \Theta$,

$$\frac{d}{dY}\bigl[\log\ell(\hat\theta) - \log\ell(\theta_0)\bigr] = \frac{d}{dY}\bigl[\eta(\hat\theta)Y - \xi(\hat\theta) - \eta(\theta_0)Y + \xi(\theta_0)\bigr]$$
$$= \frac{d\hat\theta}{dY}\,\eta'(\hat\theta)Y + \eta(\hat\theta) - \frac{d\hat\theta}{dY}\,\xi'(\hat\theta) - \eta(\theta_0)$$
$$= \frac{d\hat\theta}{dY}\bigl[\eta'(\hat\theta)Y - \xi'(\hat\theta)\bigr] + \eta(\hat\theta) - \eta(\theta_0) = \eta(\hat\theta) - \eta(\theta_0),$$
which is positive (or negative) if $\hat\theta > \theta_0$ (or $\hat\theta < \theta_0$); i.e., $\log\ell(\hat\theta) - \log\ell(\theta_0)$ is strictly increasing in $Y$ when $\hat\theta > \theta_0$ and strictly decreasing in $Y$ when $\hat\theta < \theta_0$. Hence, for any $c \in (0,1)$, the event that $\hat\theta > \theta_0$ and $\ell(\theta_0)/\ell(\hat\theta) < c$ is equivalent to $Y > d$ for some $d \in \mathbb{R}$.

Example 6.20
Consider the testing problem $H_0: \theta = \theta_0$ versus $H_1: \theta \neq \theta_0$ based on i.i.d. $X_1,\ldots,X_n$ from the uniform distribution $U(0,\theta)$. We now show that the UMP test with rejection region $X_{(n)} > \theta_0$ or $X_{(n)} \le \theta_0\,\alpha^{1/n}$ given in Exercise 19(c) is an LR test.
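Before deriving $\lambda(X)$, here is a quick Monte Carlo sketch (assuming numpy; the constants are illustrative) checking that this rejection region has size $\alpha$ under $\theta = \theta_0$:

```python
import numpy as np

# Example 6.20: under theta = theta0, P(X_(n) > theta0) = 0 and
# P(X_(n) <= theta0 * alpha**(1/n)) = alpha, so the size is alpha.
rng = np.random.default_rng(1)
n, theta0, alpha, reps = 10, 2.0, 0.05, 200_000

x_max = rng.uniform(0.0, theta0, size=(reps, n)).max(axis=1)
reject = (x_max > theta0) | (x_max <= theta0 * alpha ** (1 / n))
print(f"estimated size: {reject.mean():.4f}  (nominal {alpha})")
```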

Note that $\ell(\theta) = \theta^{-n} I_{(X_{(n)},\infty)}(\theta)$. Hence
$$\lambda(X) = \begin{cases} (X_{(n)}/\theta_0)^n & X_{(n)} \le \theta_0 \\ 0 & X_{(n)} > \theta_0, \end{cases}$$
and $\lambda(X) < c$ is equivalent to $X_{(n)} > \theta_0$ or $X_{(n)}/\theta_0 < c^{1/n}$. Taking $c = \alpha$ ensures that the LR test has size $\alpha$.

Example 6.21
Consider the normal linear model $X \sim N_n(Z\beta, \sigma^2 I_n)$ and the hypotheses $H_0: L\beta = 0$ versus $H_1: L\beta \neq 0$, where $L$ is an $s \times p$ matrix of rank $s \le r$ and all rows of $L$ are in $\mathscr{R}(Z)$. The likelihood function in this problem is
$$\ell(\theta) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left\{-\frac{1}{2\sigma^2}\,\|X - Z\beta\|^2\right\}, \qquad \theta = (\beta, \sigma^2).$$
Since $\|X - Z\beta\|^2 \ge \|X - Z\hat\beta\|^2$ for any $\beta$ and the LSE $\hat\beta$,
$$\ell(\theta) \le \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left\{-\frac{1}{2\sigma^2}\,\|X - Z\hat\beta\|^2\right\}.$$

Treating the right-hand side of this expression as a function of $\sigma^2$, it is easy to show that it has a maximum at $\sigma^2 = \hat\sigma^2 = \|X - Z\hat\beta\|^2/n$, and
$$\sup_{\theta\in\Theta} \ell(\theta) = (2\pi\hat\sigma^2)^{-n/2} e^{-n/2}.$$
Similarly, let $\hat\beta_{H_0}$ be the LSE under $H_0$ and $\hat\sigma^2_{H_0} = \|X - Z\hat\beta_{H_0}\|^2/n$; then
$$\sup_{\theta\in\Theta_0} \ell(\theta) = (2\pi\hat\sigma^2_{H_0})^{-n/2} e^{-n/2}.$$
Thus,
$$\lambda(X) = \bigl(\hat\sigma^2/\hat\sigma^2_{H_0}\bigr)^{n/2} = \left(\frac{\|X - Z\hat\beta\|^2}{\|X - Z\hat\beta_{H_0}\|^2}\right)^{n/2}.$$
For a two-sample problem, we let $n = n_1 + n_2$, $\beta = (\mu_1, \mu_2)$, and
$$Z = \begin{pmatrix} J_{n_1} & 0 \\ 0 & J_{n_2} \end{pmatrix}.$$
Testing $H_0: \mu_1 = \mu_2$ versus $H_1: \mu_1 \neq \mu_2$ is the same as testing $H_0: L\beta = 0$ versus $H_1: L\beta \neq 0$ with $L = (1\;\; {-1})$. The LR test is the same as the two-sample two-sided t-test.
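To see the equivalence with the t-test numerically, note that here $\lambda(X) = (1 + t^2/(n-2))^{-n/2}$, where $t$ is the two-sample t statistic; this is decreasing in $|t|$, so rejecting for small $\lambda$ is rejecting for large $|t|$. A minimal sketch (assuming numpy; the data are simulated for illustration):

```python
import numpy as np

# Two-sample case of Example 6.21: lambda(X) = (RSS_full / RSS_null)^(n/2)
# agrees with (1 + t^2/(n-2))^(-n/2) for the two-sample t statistic.
rng = np.random.default_rng(2)
x1 = rng.normal(0.0, 1.0, size=12)
x2 = rng.normal(0.5, 1.0, size=15)
n1, n2 = len(x1), len(x2)
n = n1 + n2

rss_full = ((x1 - x1.mean()) ** 2).sum() + ((x2 - x2.mean()) ** 2).sum()
pooled = np.concatenate([x1, x2])
rss_null = ((pooled - pooled.mean()) ** 2).sum()
lam = (rss_full / rss_null) ** (n / 2)

sp2 = rss_full / (n - 2)                        # pooled variance estimate
t = (x1.mean() - x2.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
print(lam, (1 + t ** 2 / (n - 2)) ** (-n / 2))  # the two values agree
```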

Example: Exercise 6.84

Let $F$ and $G$ be two known cumulative distribution functions on $\mathbb{R}$ and let $X$ be a single observation from the cumulative distribution function $\theta F(x) + (1-\theta)G(x)$, where $\theta \in [0,1]$ is unknown. We first derive the likelihood ratio $\lambda(X)$ for testing $H_0: \theta \le \theta_0$ versus $H_1: \theta > \theta_0$, where $\theta_0 \in [0,1)$ is a known constant.
Let $f$ and $g$ be the probability densities of $F$ and $G$, respectively, with respect to the measure corresponding to $F + G$. Then the likelihood function is
$$\ell(\theta) = \theta[f(X) - g(X)] + g(X)$$
and
$$\sup_{0\le\theta\le 1} \ell(\theta) = \begin{cases} f(X) & f(X) \ge g(X) \\ g(X) & f(X) < g(X). \end{cases}$$
For $\theta_0 \in [0,1)$,
$$\sup_{0\le\theta\le\theta_0} \ell(\theta) = \begin{cases} \theta_0[f(X) - g(X)] + g(X) & f(X) \ge g(X) \\ g(X) & f(X) < g(X). \end{cases}$$

Hence,
$$\lambda(X) = \begin{cases} \dfrac{\theta_0[f(X) - g(X)] + g(X)}{f(X)} & f(X) \ge g(X) \\ 1 & f(X) < g(X). \end{cases}$$
Choose a constant $c$ with $\theta_0 \le c < 1$. Then $\lambda(X) \le c$ is the same as
$$\frac{g(X)}{f(X)} \le \frac{c - \theta_0}{1 - \theta_0}.$$
We may find a $c$ with $P(\lambda(X) \le c) = \alpha$ when $\theta = \theta_0$.

Consider next $H_0: \theta_1 \le \theta \le \theta_2$ versus $H_1: \theta < \theta_1$ or $\theta > \theta_2$, where $0 \le \theta_1 < \theta_2 \le 1$ are known constants. For $0 \le \theta_1 < \theta_2 \le 1$,
$$\sup_{\theta_1\le\theta\le\theta_2} \ell(\theta) = \begin{cases} \theta_2[f(X) - g(X)] + g(X) & f(X) \ge g(X) \\ \theta_1[f(X) - g(X)] + g(X) & f(X) < g(X). \end{cases}$$
Hence,
$$\lambda(X) = \begin{cases} \dfrac{\theta_2[f(X) - g(X)] + g(X)}{f(X)} & f(X) \ge g(X) \\ \dfrac{\theta_1[f(X) - g(X)] + g(X)}{g(X)} & f(X) < g(X). \end{cases}$$

Choose a constant $c$ with $\max\{1-\theta_1, \theta_2\} \le c < 1$. Then $\lambda(X) \le c$ is the same as
$$\frac{g(X)}{f(X)} \le \frac{c - \theta_2}{1 - \theta_2} \qquad\text{or}\qquad \frac{f(X)}{g(X)} \le \frac{c - (1-\theta_1)}{\theta_1}.$$
How to find a $c$ with $\sup_{\theta_1\le\theta\le\theta_2} P(\lambda(X) \le c) = \alpha$?

Finally, consider $H_0: \theta \le \theta_1$ or $\theta \ge \theta_2$ versus $H_1: \theta_1 < \theta < \theta_2$, where $0 \le \theta_1 < \theta_2 \le 1$ are known constants. Note that
$$\sup_{0\le\theta\le\theta_1 \text{ or } \theta_2\le\theta\le 1} \ell(\theta) = \sup_{0\le\theta\le 1} \ell(\theta),$$
since $\ell(\theta)$ is linear in $\theta$ and its supremum over $[0,1]$ is attained at $\theta = 0$ or $\theta = 1$, both of which belong to $\Theta_0$. Hence $\lambda(X) \equiv 1$. This means that, unless we consider randomization, we cannot find a $c$ such that $\sup_{\theta\le\theta_1 \text{ or } \theta\ge\theta_2} P(\lambda(X) \le c) = \alpha$.
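With concrete (hypothetical) choices of $F$ and $G$, the likelihood ratio for the first problem is straightforward to evaluate; since both distributions below have Lebesgue densities, the ratio $g/f$ computed from Lebesgue densities coincides with the ratio of densities w.r.t. the measure from $F + G$. A minimal sketch assuming scipy:

```python
from scipy.stats import norm

# Exercise 6.84 with illustrative choices F = N(0,1), G = N(1,1):
# lambda(x) for H0: theta <= theta0, from the first case above.
theta0 = 0.3

def lam(x):
    f, g = norm.pdf(x, loc=0, scale=1), norm.pdf(x, loc=1, scale=1)
    if f >= g:
        return (theta0 * (f - g) + g) / f
    return 1.0

for x in (-1.0, 0.5, 2.0):
    print(x, round(lam(x), 4))  # equals 1 once f(x) <= g(x), i.e. x >= 1/2
```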

It is often difficult to construct a test with exact size $\alpha$ or level $\alpha$. Tests whose rejection regions are constructed using asymptotic theory (so that these tests have asymptotic level $\alpha$) are called asymptotic tests; they are useful when a test of exact size $\alpha$ is difficult to find.

Definition 2.13 (asymptotic tests)
Let $X = (X_1,\ldots,X_n)$ be a sample from $P \in \mathcal{P}$ and let $T_n(X)$ be a test for $H_0: P \in \mathcal{P}_0$ versus $H_1: P \in \mathcal{P}_1$.
(i) If $\limsup_n \alpha_{T_n}(P) \le \alpha$ for any $P \in \mathcal{P}_0$, then $\alpha$ is an asymptotic significance level of $T_n$.
(ii) If $\lim_n \sup_{P \in \mathcal{P}_0} \alpha_{T_n}(P)$ exists, it is called the limiting size of $T_n$.
(iii) $T_n$ is consistent if and only if the type II error probability converges to 0 for any $P \in \mathcal{P}_1$.

If $\mathcal{P}_0$ is not a parametric family, the limiting size of $T_n$ may be 1; this is the reason why we consider the weaker requirement in (i). If $\alpha \in (0,1)$ is a pre-assigned level of significance for the problem, then a consistent test $T_n$ having asymptotic significance level $\alpha$ is called asymptotically correct, and a consistent test having limiting size $\alpha$ is called strongly asymptotically correct.

In the i.i.d. case we can obtain the asymptotic distribution (under $H_0$) of the likelihood ratio $\lambda(X)$, so that an LR test having asymptotic significance level $\alpha$ can be obtained.

Theorem 6.5 (asymptotic distribution of the likelihood ratio)
Assume the conditions in Theorem 4.17. Suppose that $H_0: \theta = g(\vartheta)$, where $\vartheta$ is a $(k-r)$-vector of unknown parameters and $g$ is a continuously differentiable function from $\mathbb{R}^{k-r}$ to $\mathbb{R}^k$ with a full-rank $\partial g(\vartheta)/\partial\vartheta$. Under $H_0$,
$$-2\log\lambda_n \to_d \chi^2_r,$$
where $\lambda_n = \lambda(X)$ and $\chi^2_r$ is a random variable having the chi-square distribution $\chi^2_r$.
Consequently, the LR test with rejection region $\lambda_n < e^{-\chi^2_{r,\alpha}/2}$ has asymptotic significance level $\alpha$, where $\chi^2_{r,\alpha}$ is the $(1-\alpha)$th quantile of the chi-square distribution $\chi^2_r$.
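For a concrete check of Theorem 6.5 (a simulation sketch assuming numpy/scipy; the constants are illustrative), take i.i.d. $N(\mu, \sigma^2)$ data and $H_0: \mu = 0$ with $\sigma^2$ unknown, so $k = 2$ and $r = 1$; here $-2\log\lambda_n = n\log(\hat\sigma^2_{H_0}/\hat\sigma^2)$ has an explicit form:

```python
import numpy as np
from scipy.stats import chi2

# Reject when lambda_n < exp(-chi2_{1,alpha}/2), i.e. when
# -2 log lambda_n > chi2_{1,alpha}; under H0 the rate should be near alpha.
rng = np.random.default_rng(3)
n, alpha, reps = 50, 0.05, 100_000
cutoff = chi2.ppf(1 - alpha, df=1)

x = rng.normal(0.0, 1.5, size=(reps, n))   # H0: mu = 0 holds
s2_full = x.var(axis=1)                    # MLE of sigma^2 (divides by n)
s2_null = (x ** 2).mean(axis=1)            # MLE of sigma^2 under H0
neg2loglam = n * np.log(s2_null / s2_full)
print(f"rejection rate: {(neg2loglam > cutoff).mean():.4f} (nominal {alpha})")
```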

Proof

Without loss of generality, we assume that there exist an MLE $\hat\theta$ and an MLE $\hat\vartheta$ under $H_0$ such that
$$\lambda_n = \sup_{\theta\in\Theta_0} \ell(\theta) \Big/ \sup_{\theta\in\Theta} \ell(\theta) = \ell(g(\hat\vartheta)) \big/ \ell(\hat\theta).$$
Let $s_n(\theta) = \partial\log\ell(\theta)/\partial\theta$ and let $I_1(\theta)$ be the Fisher information about $\theta$ contained in $X_1$. Following the proof of Theorem 4.17 in §4.5.2, we can obtain
$$\sqrt{n}\, I_1(\theta)(\hat\theta - \theta) = n^{-1/2} s_n(\theta) + o_p(1)$$
and
$$2[\log\ell(\hat\theta) - \log\ell(\theta)] = n(\hat\theta - \theta)^\tau I_1(\theta)(\hat\theta - \theta) + o_p(1).$$
Then
$$2[\log\ell(\hat\theta) - \log\ell(\theta)] = n^{-1}[s_n(\theta)]^\tau [I_1(\theta)]^{-1} s_n(\theta) + o_p(1).$$
Similarly, under $H_0$,
$$2[\log\ell(g(\hat\vartheta)) - \log\ell(g(\vartheta))] = n^{-1}[\tilde s_n(\vartheta)]^\tau [\tilde I_1(\vartheta)]^{-1} \tilde s_n(\vartheta) + o_p(1),$$

where $\tilde s_n(\vartheta) = \partial\log\ell(g(\vartheta))/\partial\vartheta = D(\vartheta)\, s_n(g(\vartheta))$, $D(\vartheta) = \partial g(\vartheta)/\partial\vartheta$, and $\tilde I_1(\vartheta)$ is the Fisher information about $\vartheta$ (under $H_0$) contained in $X_1$.
Combining these results, we obtain that, under $H_0$,
$$-2\log\lambda_n = 2[\log\ell(\hat\theta) - \log\ell(g(\hat\vartheta))] = n^{-1}[s_n(g(\vartheta))]^\tau B(\vartheta)\, s_n(g(\vartheta)) + o_p(1),$$
where $B(\vartheta) = [I_1(g(\vartheta))]^{-1} - [D(\vartheta)]^\tau [\tilde I_1(\vartheta)]^{-1} D(\vartheta)$.
By the CLT, $n^{-1/2}[I_1(\theta)]^{-1/2} s_n(\theta) \to_d Z$, where $Z \sim N_k(0, I_k)$. Then it follows from Theorem 1.10(iii) that, under $H_0$,
$$-2\log\lambda_n \to_d Z^\tau [I_1(g(\vartheta))]^{1/2} B(\vartheta) [I_1(g(\vartheta))]^{1/2} Z.$$
Let $D = D(\vartheta)$, $B = B(\vartheta)$, $A = I_1(g(\vartheta))$, and $C = \tilde I_1(\vartheta)$. Then
$$(A^{1/2} B A^{1/2})^2 = A^{1/2} B A B A^{1/2} = A^{1/2}(A^{-1} - D^\tau C^{-1} D) A (A^{-1} - D^\tau C^{-1} D) A^{1/2}$$
$$= (I_k - A^{1/2} D^\tau C^{-1} D A^{1/2})(I_k - A^{1/2} D^\tau C^{-1} D A^{1/2})$$

$$= I_k - 2 A^{1/2} D^\tau C^{-1} D A^{1/2} + A^{1/2} D^\tau C^{-1} D A D^\tau C^{-1} D A^{1/2}$$
$$= I_k - A^{1/2} D^\tau C^{-1} D A^{1/2} = A^{1/2} B A^{1/2},$$
where the fourth equality follows from the fact that $C = D A D^\tau$. This shows that $A^{1/2} B A^{1/2}$ is a projection matrix, and its rank is
$$\operatorname{tr}(A^{1/2} B A^{1/2}) = \operatorname{tr}(I_k - D^\tau C^{-1} D A) = k - \operatorname{tr}(C^{-1} D A D^\tau) = k - \operatorname{tr}(C^{-1} C) = k - (k-r) = r.$$
Thus, by Exercise 51 in §1.6,
$$Z^\tau [I_1(g(\vartheta))]^{1/2} B(\vartheta) [I_1(g(\vartheta))]^{1/2} Z \sim \chi^2_r,$$
which completes the proof.

Asymptotic tests based on likelihoods

There are two popular asymptotic tests based on likelihoods that are asymptotically equivalent to LR tests. The hypothesis $H_0: \theta = g(\vartheta)$ is equivalent to a set of $r \le k$ equations,
$$H_0: R(\theta) = 0,$$
where $R(\theta)$ is a continuously differentiable function from $\mathbb{R}^k$ to $\mathbb{R}^r$.

Wald (1943) introduced a test that rejects $H_0$ when the value of
$$W_n = [R(\hat\theta)]^\tau \bigl\{[C(\hat\theta)]^\tau [I_n(\hat\theta)]^{-1} C(\hat\theta)\bigr\}^{-1} R(\hat\theta)$$
is large, where $C(\theta) = \partial R(\theta)/\partial\theta$, $I_n(\theta)$ is the Fisher information matrix based on $X_1,\ldots,X_n$, and $\hat\theta$ is an MLE or RLE of $\theta$.

Rao (1947) introduced a score test that rejects $H_0$ when the value of
$$R_n = [s_n(\tilde\theta)]^\tau [I_n(\tilde\theta)]^{-1} s_n(\tilde\theta)$$
is large, where $s_n(\theta) = \partial\log\ell(\theta)/\partial\theta$ is the score function and $\tilde\theta$ is an MLE or RLE of $\theta$ under $H_0: R(\theta) = 0$.

Wald's test requires computing $\hat\theta$ but not $\tilde\theta = g(\hat\vartheta)$; Rao's score test requires computing $\tilde\theta$ but not $\hat\theta$.
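For a concrete one-parameter illustration (a sketch assuming numpy; not from the lecture), take i.i.d. Bernoulli($p$) data and $H_0: R(p) = p - p_0 = 0$, so $k = r = 1$, $C(p) = 1$, and $I_n(p) = n/[p(1-p)]$; then $W_n = n(\hat p - p_0)^2/[\hat p(1-\hat p)]$ and $R_n = n(\hat p - p_0)^2/[p_0(1-p_0)]$:

```python
import numpy as np

# Wald and Rao score statistics for Bernoulli(p), H0: p = p0.
rng = np.random.default_rng(4)
n, p0 = 200, 0.5
x = rng.binomial(1, 0.55, size=n)  # hypothetical data; true p = 0.55
p_hat = x.mean()                   # unrestricted MLE

W = n * (p_hat - p0) ** 2 / (p_hat * (1 - p_hat))  # uses p_hat only
R = n * (p_hat - p0) ** 2 / (p0 * (1 - p0))        # uses p0 only
print(f"W_n = {W:.3f}, R_n = {R:.3f}")
```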

Theorem 6.6
Assume the conditions in Theorem 4.17.
(i) Under $H_0: R(\theta) = 0$, where $R(\theta)$ is a continuously differentiable function from $\mathbb{R}^k$ to $\mathbb{R}^r$, $W_n \to_d \chi^2_r$; therefore, the test that rejects $H_0$ if and only if $W_n > \chi^2_{r,\alpha}$ has asymptotic significance level $\alpha$, where $\chi^2_{r,\alpha}$ is the $(1-\alpha)$th quantile of the chi-square distribution $\chi^2_r$.
(ii) The result in (i) still holds if $W_n$ is replaced by $R_n$.

Proof

(i) Using Theorems 1.12 and 4.17,
$$\sqrt{n}\,[R(\hat\theta) - R(\theta)] \to_d N_r\bigl(0, [C(\theta)]^\tau [I_1(\theta)]^{-1} C(\theta)\bigr),$$
where $I_1(\theta)$ is the Fisher information about $\theta$ contained in $X_1$. Under $H_0$, $R(\theta) = 0$ and, therefore (by Theorem 1.10),
$$n[R(\hat\theta)]^\tau \bigl\{[C(\theta)]^\tau [I_1(\theta)]^{-1} C(\theta)\bigr\}^{-1} R(\hat\theta) \to_d \chi^2_r.$$
The result then follows from Slutsky's theorem (Theorem 1.11) and the facts that $\hat\theta \to_p \theta$ and that $I_1(\theta)$ and $C(\theta)$ are continuous at $\theta$.
(ii) See the textbook.
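Continuing the Bernoulli sketch above (same assumptions; data are simulated for illustration), the three statistics $W_n$, $R_n$, and $-2\log\lambda_n$ can be compared against the same chi-square cutoff; they differ for finite $n$ but are asymptotically equivalent under $H_0$:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
n, p0 = 200, 0.5
x = rng.binomial(1, 0.55, size=n)
y, p_hat = x.sum(), x.mean()

W = n * (p_hat - p0) ** 2 / (p_hat * (1 - p_hat))
R = n * (p_hat - p0) ** 2 / (p0 * (1 - p0))
# -2 log lambda_n for the Bernoulli likelihood
LR = 2 * (y * np.log(p_hat / p0) + (n - y) * np.log((1 - p_hat) / (1 - p0)))
cut = chi2.ppf(0.95, df=1)
print(f"W_n={W:.3f}  R_n={R:.3f}  -2log(lambda_n)={LR:.3f}  cutoff={cut:.3f}")
```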
