Theoretical Statistics. Lecture 23.
Peter Bartlett

1. Recall: QMD and local asymptotic normality. [vdv7]
2. Convergence of experiments, maximum likelihood.
3. Relative efficiency of tests. [vdv14]
Local asymptotic normality

We've seen that, for a QMD model $P_\theta$, the log likelihood ratio
$$\log \prod_{i=1}^n \frac{dP_{\theta_0+h/\sqrt{n}}}{dP_{\theta_0}}(X_i)$$
is asymptotically normal. This is useful for:

1. Comparing the null $\theta_0$ and the shrinking alternative $\theta_0 + h/\sqrt{n}$ with a likelihood ratio test.
2. Understanding the local behavior of a statistic $T_n$: if $\theta$ is fixed and we understand $T_n$'s asymptotics under $P_\theta$, we can use the asymptotics of the log likelihood ratio to understand the asymptotics of $T_n$ in a local neighborhood of $\theta$. The appropriate local scale is typically $1/\sqrt{n}$.
Recall: QMD and local asymptotic normality

Theorem: If $\Theta$ is an open subset of $\mathbb{R}^k$ and $P_\theta$ is QMD at $\theta \in \Theta$, then

1. $P_\theta \dot\ell_\theta = 0$,
2. $I_\theta = P_\theta \dot\ell_\theta \dot\ell_\theta^T$ exists,
3. for every $h_n$ satisfying $\sqrt{n}\,h_n \to h$,
$$\log \prod_{i=1}^n \frac{p_{\theta+h_n}}{p_\theta}(X_i) = \frac{1}{\sqrt{n}} \sum_{i=1}^n h^T \dot\ell_\theta(X_i) - \frac{1}{2} h^T I_\theta h + o_{P_\theta}(1) \;\xrightarrow{\theta}\; N\left(-\tfrac{1}{2} h^T I_\theta h,\; h^T I_\theta h\right).$$
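As an illustrative aside (not from the original slides), the LAN expansion can be checked by simulation. The sketch below uses the $N(\theta, 1)$ model, chosen for convenience because its score and information are known in closed form ($\dot\ell_\theta(x) = x - \theta$, $I_\theta = 1$); it verifies that, under $P_0$, the log likelihood ratio against the local alternative $h/\sqrt{n}$ has mean near $-\tfrac12 h^2$ and variance near $h^2$:

```python
import numpy as np

# Numerical check of LAN for the N(theta, 1) model, where the score is
# x - theta and I_theta = 1 (an illustrative choice, not from the slides).
rng = np.random.default_rng(0)
n, h, reps = 400, 1.5, 20000

# Under P_0, draw X_1..X_n and compute log prod p_{h/sqrt(n)}(X_i)/p_0(X_i),
# which for this model equals (h/sqrt(n)) * sum(X_i) - h^2/2 exactly.
X = rng.standard_normal((reps, n))
loglr = (h / np.sqrt(n)) * X.sum(axis=1) - 0.5 * h**2

print(loglr.mean())   # close to -h^2/2 = -1.125
print(loglr.var())    # close to h^2 = 2.25
```

Note that the simulated mean is (approximately) minus half the simulated variance, the hallmark of local asymptotic normality.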
Recall: Quadratic mean differentiability

Definition: The root density $\theta \mapsto \sqrt{p_\theta}$ (for $\theta \in \mathbb{R}^k$) is differentiable in quadratic mean at $\theta$ if there exists a vector-valued measurable function $\dot\ell_\theta : \mathcal{X} \to \mathbb{R}^k$ such that, as $h \to 0$,
$$\int \left( \sqrt{p_{\theta+h}} - \sqrt{p_\theta} - \frac{1}{2} h^T \dot\ell_\theta \sqrt{p_\theta} \right)^2 d\mu = o(\|h\|^2).$$
Recall: Asymptotically linear statistics

Suppose the model $\{P_\theta : \theta \in \Theta\}$ is QMD, and a statistic $T_n$ satisfies
$$\sqrt{n}(T_n - \mu_\theta) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \psi_\theta(X_i) + o_{P_\theta}(1),$$
where $P_\theta \psi_\theta = 0$ and $P_\theta \psi_\theta \psi_\theta^T = \Sigma$. Then for $\sqrt{n}\,h_n \to h$,
$$\left( \sqrt{n}(T_n - \mu_\theta),\; \log \frac{dP^n_{\theta+h_n}}{dP^n_\theta} \right) \;\xrightarrow{\theta}\; N\left( \begin{pmatrix} 0 \\ -\frac{1}{2} h^T I_\theta h \end{pmatrix},\; \begin{pmatrix} \Sigma & \tau \\ \tau^T & h^T I_\theta h \end{pmatrix} \right),$$
where $\tau = P_\theta \psi_\theta h^T \dot\ell_\theta$. So
$$\sqrt{n}(T_n - \mu_\theta) \;\xrightarrow{\theta+h_n}\; N\left( P_\theta \psi_\theta h^T \dot\ell_\theta,\; \Sigma \right).$$
Asymptotically linear statistics

That is, we know that under $\theta$,
$$\sqrt{n}(T_n - \mu_\theta) \xrightarrow{\theta} N(0, \Sigma),$$
and we can use the asymptotics of the log likelihood ratio to determine the asymptotics of this statistic under the shrinking alternative $\theta + h/\sqrt{n}$:
$$\sqrt{n}(T_n - \mu_\theta) \xrightarrow{\theta+h/\sqrt{n}} N\left( P_\theta \psi_\theta h^T \dot\ell_\theta,\; \Sigma \right).$$
Asymptotically linear statistics: Example

Location families: Suppose that $p_\theta(x) = f(x - \theta)$, where $f$ is positive, continuously differentiable, and satisfies
$$\mu = \int x f(x)\,dx = 0, \qquad \sigma^2 = \int x^2 f(x)\,dx < \infty, \qquad I_\theta = \int \left( \frac{f'(x)}{f(x)} \right)^2 f(x)\,dx < \infty.$$
This family is QMD.
Asymptotically linear statistics: Example

1. Consider the t-statistic for the null hypothesis $\theta = 0$,
$$T_n = \frac{\frac{1}{n} \sum_{i=1}^n X_i}{S_n}, \qquad \sqrt{n}\,T_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n \frac{X_i}{\sigma} + o_{P_0}(1).$$
Thus $T_n$ is an asymptotically linear statistic, with
$$\psi_\theta(x) = \frac{x}{\sigma}, \qquad \dot\ell_\theta(x) = -\frac{f'(x-\theta)}{f(x-\theta)}.$$
Asymptotically linear statistics: Example

Hence, for $h_n$ satisfying $\sqrt{n}\,h_n \to h$,
$$\sqrt{n}\,T_n \xrightarrow{h_n} N\left( P_0 \psi_0 h \dot\ell_0,\; P_0 \psi_0^2 \right),$$
with
$$P_0 \psi_0 h \dot\ell_0 = -P_0 \left[ \frac{X}{\sigma} \cdot \frac{f'(X)}{f(X)} \right] h = -\frac{h}{\sigma} \int x f'(x)\,dx = \frac{h}{\sigma} \int f(x)\,dx = \frac{h}{\sigma}$$
(integrating by parts), and
$$P_0 \psi_0^2 = \frac{1}{\sigma^2} P_0 X^2 = 1.$$
So
$$\sqrt{n}\,T_n \xrightarrow{h_n} N\left( \frac{h}{\sigma},\; 1 \right).$$
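As a numerical sanity check (not from the slides), the limit $\sqrt{n}\,T_n \xrightarrow{h_n} N(h/\sigma, 1)$ can be simulated. The sketch below takes $f$ to be the standard normal density, an illustrative assumption for which $\sigma = 1$, so the limit is $N(h, 1)$:

```python
import numpy as np

# Monte Carlo sketch of sqrt(n) T_n -> N(h/sigma, 1) under theta_n = h/sqrt(n),
# using the standard normal location family (so sigma = 1, limit N(h, 1)).
rng = np.random.default_rng(1)
n, h, reps = 500, 2.0, 20000

X = rng.standard_normal((reps, n)) + h / np.sqrt(n)   # X_i ~ f(x - h/sqrt(n))
Tn = X.mean(axis=1) / X.std(axis=1, ddof=1)           # the t-statistic
Z = np.sqrt(n) * Tn

print(Z.mean())   # close to h/sigma = 2
print(Z.var())    # close to 1
```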
Asymptotically linear statistics: Example

2. Suppose that $P_0(X > 0) = 1/2$ and consider the sign statistic for the null hypothesis $\theta = 0$,
$$s_n = \frac{1}{n} \sum_{i=1}^n \left( 1[X_i > 0] - \frac{1}{2} \right).$$
Thus $s_n$ is an asymptotically linear statistic, with
$$\psi_\theta(x) = 1[x > 0] - P_\theta(X > 0), \qquad \dot\ell_\theta(x) = -\frac{f'(x-\theta)}{f(x-\theta)}.$$
Hence, for $h_n$ satisfying $\sqrt{n}\,h_n \to h$,
$$\sqrt{n}\,s_n \xrightarrow{h_n} N\left( P_0 \psi_0 h \dot\ell_0,\; P_0 \psi_0^2 \right).$$
Asymptotically linear statistics: Example

$$P_0 \psi_0 h \dot\ell_0 = -P_0 \left[ \left( 1[X > 0] - \frac{1}{2} \right) \frac{f'(X)}{f(X)} \right] h = -h \int \left( 1[x > 0] - \frac{1}{2} \right) f'(x)\,dx = -h \left( \int_0^\infty f'(x)\,dx - \frac{1}{2} \int_{-\infty}^\infty f'(x)\,dx \right) = h f(0),$$
since $\int_0^\infty f'(x)\,dx = -f(0)$ and $\int_{-\infty}^\infty f'(x)\,dx = 0$. Also,
$$P_0 \psi_0^2 = \frac{1}{4}.$$
So
$$\sqrt{n}\,s_n \xrightarrow{h_n} N\left( h f(0),\; \frac{1}{4} \right).$$
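The same kind of numerical check (again an illustrative aside, not from the slides) works for the sign statistic: with $f$ the standard normal density, $f(0) = 1/\sqrt{2\pi} \approx 0.399$, so $\sqrt{n}\,s_n$ under $\theta_n = h/\sqrt{n}$ should be approximately $N(h/\sqrt{2\pi},\, 1/4)$:

```python
import numpy as np

# Monte Carlo sketch of sqrt(n) s_n -> N(h f(0), 1/4) under theta_n = h/sqrt(n),
# for the standard normal location family, where f(0) = 1/sqrt(2 pi).
rng = np.random.default_rng(2)
n, h, reps = 500, 2.0, 20000
f0 = 1.0 / np.sqrt(2 * np.pi)

X = rng.standard_normal((reps, n)) + h / np.sqrt(n)   # data from theta_n
sn = (X > 0).mean(axis=1) - 0.5                       # s_n = mean of (1[X_i>0] - 1/2)
Z = np.sqrt(n) * sn

print(Z.mean())   # close to h * f0, about 0.80
print(Z.var())    # close to 1/4
```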
Convergence of local statistical experiments

Theorem: Suppose $(P_\theta : \theta \in \Theta \subseteq \mathbb{R}^k)$ is QMD at $\theta$ with nonsingular Fisher information $I_\theta$, $T_n$ are statistics in the local experiments $(P^n_{\theta+h/\sqrt{n}} : h \in \mathbb{R}^k)$, and for every $h$ there is a law $L_h$ such that $T_n \xrightarrow{h} L_h$. Then there is a randomized statistic $T$ in the experiment $(N(h, I_\theta^{-1}) : h \in \mathbb{R}^k)$ such that, for each $h$, $T_n \xrightarrow{h} T$.

The proof uses Le Cam's lemmas (change of measure via the asymptotically normal log likelihood ratio).
Convergence of local statistical experiments

For the local statistical experiment $(P^n_{\theta+h/\sqrt{n}} : h \in \mathbb{R}^k)$, think of $\theta$ as a particular parameter value and $\theta + h/\sqrt{n}$ as a nearby value. We are interested in the asymptotic behavior of statistics when the parameter is near the value $\theta$.

Motivation: If $T_n$ defines a test, then the power $P_h(T_n > c)$ depends on the law of $T_n$, so we can study its asymptotics via statistics in a normal experiment. If $T_n$ is an estimator, then we can study the asymptotics of the expected squared error $E_h(T_n - h)^2$ via statistics in a normal experiment.
Maximum likelihood

Consider the maximum likelihood estimator $T_n = \hat h_n$ for the local experiment $(P^n_{\theta+h/\sqrt{n}} : h \in \mathbb{R}^k)$. (Notice that $\hat h_n = \sqrt{n}(\hat\theta_n - \theta)$.) Typically, the matching statistic in the limit experiment is the maximum likelihood estimator $T = X \sim N(h, I_\theta^{-1})$. So we expect the asymptotic distribution of $\sqrt{n}(\hat\theta_n - \theta)$ to be $N(0, I_\theta^{-1})$ under $\theta$.

Note that the previous theorem does not imply that this particular statistic in the limit experiment (the maximum likelihood estimator) is the weak limit of the $T_n$. This needs some additional conditions.
Maximum likelihood

Theorem: Suppose
1. $(P_\theta : \theta \in \Theta)$ is QMD at $\theta$ with nonsingular Fisher information $I_\theta$,
2. for every $x$, the map $\theta \mapsto \log p_\theta(x)$ is Lipschitz, and
3. the maximum likelihood estimator $\hat\theta_n$ is consistent.
Then
$$\sqrt{n}(\hat\theta_n - \theta) \xrightarrow{\theta} N(0, I_\theta^{-1}).$$
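The conclusion $\sqrt{n}(\hat\theta_n - \theta) \rightsquigarrow N(0, I_\theta^{-1})$ is easy to see numerically. The sketch below uses the exponential model $p_\theta(x) = \theta e^{-\theta x}$, an example of my choosing (not from the slides), because its MLE and information are available in closed form: $\hat\theta_n = 1/\bar X_n$ and $I_\theta = 1/\theta^2$, so the limit variance is $\theta^2$:

```python
import numpy as np

# Monte Carlo sketch of sqrt(n)(theta_hat - theta) -> N(0, I_theta^{-1})
# for the exponential model p_theta(x) = theta * exp(-theta x),
# where the MLE is 1/mean(X) and I_theta = 1/theta^2.
rng = np.random.default_rng(3)
n, theta, reps = 1000, 2.0, 10000

X = rng.exponential(scale=1.0 / theta, size=(reps, n))
mle = 1.0 / X.mean(axis=1)
Z = np.sqrt(n) * (mle - theta)

print(Z.mean())   # close to 0 (a small O(1/sqrt(n)) bias remains)
print(Z.var())    # close to I_theta^{-1} = theta^2 = 4
```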
Relative efficiency of tests

Example: Suppose $X_1, \ldots, X_n \sim P_\theta$, where
1. $P_\theta$ has density $f(x - \theta)$ on $\mathbb{R}$,
2. $f$ is symmetric about zero (so the mean = median of $P_\theta$ is $\theta$),
3. $f$ has a unique median ($f(0) > 0$),
4. $f$ has a finite variance.
We wish to test $H_0: \theta = 0$ versus $H_1: \theta > 0$.
Relative efficiency of tests

Example: Candidate tests:
1. Sign test: $S_n = \frac{1}{n} \sum_{i=1}^n 1[X_i > 0]$.
2. t-test: $T_n = \frac{\frac{1}{n} \sum_{i=1}^n X_i}{S_n}$.
Which is better?
Relative efficiency of tests: sign test

$$S_n = \frac{1}{n} \sum_{i=1}^n 1[X_i > 0], \qquad \frac{\sqrt{n}}{\sigma(\theta)} (S_n - \mu(\theta)) \xrightarrow{\theta} N(0, 1),$$
where
$$\mu(\theta) = 1 - F(-\theta), \qquad \sigma^2(\theta) = (1 - F(-\theta)) F(-\theta).$$
Under $\theta = 0$,
$$2\sqrt{n} \left( S_n - \frac{1}{2} \right) \xrightarrow{0} N(0, 1).$$
Reject $H_0$ if $2\sqrt{n}(S_n - 1/2) > z_\alpha$.
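The rejection rule's level can be checked by simulation (an illustrative aside, not from the slides; standard normal data are an arbitrary choice, since under $H_0$ only $P_0(X > 0) = 1/2$ matters):

```python
import numpy as np

# Monte Carlo sketch: the sign test "reject when 2 sqrt(n)(S_n - 1/2) > z_alpha"
# has level close to alpha under H_0.
rng = np.random.default_rng(4)
n, alpha, reps = 500, 0.05, 20000
z_alpha = 1.6449                      # Phi^{-1}(0.95), i.e. alpha = 0.05

X = rng.standard_normal((reps, n))    # samples under theta = 0
Sn = (X > 0).mean(axis=1)
reject = 2 * np.sqrt(n) * (Sn - 0.5) > z_alpha

print(reject.mean())                  # close to alpha = 0.05
```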
Relative efficiency of tests: sign test

Definition: The power function of a test that rejects the null hypothesis when the statistic $T_n$ falls in the critical region $K_n$ is
$$\pi_n(\theta) = P_\theta(T_n \in K_n).$$
For the sign test,
$$\pi_n(\theta) = P_\theta\left( \sqrt{n}(S_n - \mu(0)) > \sigma(0) z_{\alpha_n} \right) = P_\theta\left( \frac{\sqrt{n}}{\sigma(\theta)}(S_n - \mu(\theta)) > \frac{\sigma(0) z_{\alpha_n} + \sqrt{n}(\mu(0) - \mu(\theta))}{\sigma(\theta)} \right) = 1 - \Phi\left( \frac{\sigma(0) z_{\alpha_n} + \sqrt{n}(\mu(0) - \mu(\theta))}{\sigma(\theta)} \right) + o(1).$$
Relative efficiency of tests: sign test

For $\theta = 0$, we have $\pi_n(0) = 1 - \Phi(z_{\alpha_n}) = \alpha_n$. For $\theta > 0$, $\mu(0) - \mu(\theta) = F(-\theta) - F(0) < 0$. Provided $\alpha_n \to 0$ sufficiently slowly,
$$\pi_n(\theta) = 1 - \Phi\left( \frac{\sigma(0) z_{\alpha_n} + \sqrt{n}(\mu(0) - \mu(\theta))}{\sigma(\theta)} \right) + o(1) \to \begin{cases} 0 & \text{if } \theta = 0, \\ 1 & \text{if } \theta > 0. \end{cases}$$
So the limiting power function is perfect. This is typical: any reasonable test can distinguish a fixed alternative, given unlimited data.
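The claim that power tends to 1 at any fixed alternative can be seen numerically (an illustrative aside, not from the slides; the standard normal location family, $\theta = 0.3$, and $\alpha = 0.05$ are all arbitrary choices):

```python
import numpy as np

# Monte Carlo sketch: the sign test's power at a *fixed* alternative
# grows toward 1 as n increases.
rng = np.random.default_rng(6)
theta, z_alpha, reps = 0.3, 1.6449, 5000   # z_alpha = Phi^{-1}(0.95)

powers = []
for n in (50, 200, 800):
    X = rng.standard_normal((reps, n)) + theta   # data from the fixed theta
    Sn = (X > 0).mean(axis=1)
    powers.append(float((2 * np.sqrt(n) * (Sn - 0.5) > z_alpha).mean()))

print(powers)   # increasing toward 1
```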
Relative efficiency of tests

So how do we compare tests? We need to make the problem of discriminating between the null and the alternative more difficult as $n$ increases. It is natural to consider a shrinking alternative that converges to the null. Recall our example: we wish to test $H_0: \theta = 0$ versus $H_1: \theta_n > 0$, with $\theta_n \to 0$.
Relative efficiency of tests

For the sign test,
$$\pi_n(\theta_n) = 1 - \Phi\left( \frac{\sigma(0) z_\alpha + \sqrt{n}(\mu(0) - \mu(\theta_n))}{\sigma(\theta_n)} \right) + o(1).$$
The level of the test converges:
$$\pi_n(0) = 1 - \Phi(z_\alpha) + o(1) \to \alpha.$$
What about the power? It depends on the asymptotics of $\sqrt{n}(\mu(0) - \mu(\theta_n))$. Since $F$ is differentiable at 0,
$$\sqrt{n}(\mu(0) - \mu(\theta_n)) = \sqrt{n}(F(-\theta_n) - F(0)) = -\sqrt{n}\,\theta_n f(0) + o(\sqrt{n}\,\theta_n).$$
Relative efficiency of tests

If $\theta_n \to 0$ faster than $1/\sqrt{n}$, then $\sqrt{n}(\mu(0) - \mu(\theta_n)) \to 0$, so $\pi_n(\theta_n) \to \alpha$. The test fails: these alternatives are too hard.
If $\theta_n \to 0$ slower than $1/\sqrt{n}$, then $\sqrt{n}(\mu(0) - \mu(\theta_n)) \to -\infty$, so $\pi_n(\theta_n) \to 1$. These slowly shrinking alternatives are too easy.
Consider an intermediate rate: $\sqrt{n}\,\theta_n \to h$.
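At the intermediate rate, the earlier local limit $\sqrt{n}\,s_n \rightsquigarrow N(h f(0), 1/4)$ pins down a nondegenerate limiting power: $2\sqrt{n}(S_n - 1/2)$ is approximately $N(2 h f(0), 1)$ under $\theta_n = h/\sqrt{n}$, so the power tends to $1 - \Phi(z_\alpha - 2 h f(0))$. The sketch below (an illustrative aside, not from the slides) checks this for the standard normal location family, where $f(0) = 1/\sqrt{2\pi}$:

```python
import numpy as np
from math import erf

# Monte Carlo sketch of the sign test's limiting power against theta_n = h/sqrt(n):
# power should approach 1 - Phi(z_alpha - 2 h f(0)).
rng = np.random.default_rng(5)
n, h, reps = 500, 2.0, 20000
z_alpha = 1.6449                      # Phi^{-1}(0.95), i.e. alpha = 0.05
f0 = 1.0 / np.sqrt(2 * np.pi)         # f(0) for the standard normal density

X = rng.standard_normal((reps, n)) + h / np.sqrt(n)   # data from theta_n
Sn = (X > 0).mean(axis=1)
power = (2 * np.sqrt(n) * (Sn - 0.5) > z_alpha).mean()

Phi = lambda x: 0.5 * (1 + erf(x / np.sqrt(2)))       # standard normal CDF
limit = 1 - Phi(z_alpha - 2 * h * f0)

print(power, limit)   # the two values should be close (both near 0.48)
```

Note how the limiting power depends on the test only through the factor multiplying $h$; this is the quantity that drives the relative efficiency comparison in the next lecture.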