Confidence intervals for parameters of normal distribution.
Lecture 5. Confidence intervals for parameters of the normal distribution.

Let us consider a Matlab example based on the dataset of body temperature measurements of 130 individuals from the article [1]. The dataset can be downloaded from the journal's website; it was derived from the article [2]. First of all, if we use dfittool to fit a normal distribution to this data, we get a pretty good approximation, see figure 5.1.

Figure 5.1: Fitting a body temperature dataset. (a) Histogram of the data and p.d.f. of the fitted normal distribution; (b) Empirical c.d.f. and c.d.f. of the fitted normal distribution.

The tool also outputs the MLE estimates μ̂ and σ̂ of the parameters μ, σ of the normal distribution, together with a standard error for each estimate.
Also, if our dataset vector is named normtemp, then calling the Matlab function normfit by typing

[mu,sigma,muint,sigmaint] = normfit(normtemp)

outputs the following: mu = 98.249, sigma = 0.733, muint = [98.122, 98.376], sigmaint = [0.654, 0.835]. The last two intervals here are 95% confidence intervals for the parameters μ and σ. This means that not only are we able to estimate the parameters of the normal distribution using MLE, but we can also guarantee with confidence 95% that the true unknown parameters of the distribution belong to these confidence intervals. How this is done is the topic of this lecture. Notice that the conventional normal temperature 98.6 does not fall into the estimated 95% confidence interval [98.122, 98.376].

Distribution of the estimates of parameters of the normal distribution.

Let us consider a sample X_1, ..., X_n ~ N(μ, σ²) from a normal distribution with mean μ and variance σ². MLE gave us the following estimates of μ and σ²:

μ̂ = X̄ and σ̂² = X̄² − (X̄)²,

where X̄ = (1/n) Σ X_i is the sample mean and X̄² = (1/n) Σ X_i² denotes the average of the squares. The question is: how close are these estimates to the actual values of the unknown parameters μ and σ²? By the LLN we know that these estimates converge to μ and σ²:

X̄ → μ, X̄² − (X̄)² → σ² as n → ∞,

but we will try to describe precisely how close X̄ and X̄² − (X̄)² are to μ and σ². We will start by studying the following question: what is the joint distribution of (X̄, X̄² − (X̄)²) when X_1, ..., X_n are i.i.d. from N(0, 1)? A similar question for a sample from a general normal distribution N(μ, σ²) can be reduced to this one by renormalizing Z_i = (X_i − μ)/σ. We will need the following definition.

Definition. If X_1, ..., X_n are i.i.d. standard normal, then the distribution of X_1² + ... + X_n² is called the χ²_n-distribution (chi-squared distribution) with n degrees of freedom.

We will find the p.d.f. of this distribution in the following lectures. At this point we only need to note that this distribution does not depend on any parameters other than the degrees of freedom n and, therefore, could be tabulated even if we were not able to find an explicit formula for its p.d.f.
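As a quick numerical illustration of this definition (a Python sketch, not part of the original Matlab workflow; the seed, n = 5 and the trial count are arbitrary choices): a χ²_n random variable is just a sum of n squared standard normals, so its mean should be close to n and its variance close to 2n in a large simulation.

```python
import random

random.seed(0)

def chi2_sample(n):
    """One draw from the chi-squared distribution with n degrees of
    freedom, generated directly from its definition as a sum of
    squares of n i.i.d. standard normal variables."""
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n))

n, trials = 5, 200_000
draws = [chi2_sample(n) for _ in range(trials)]
mean = sum(draws) / trials
var = sum((d - mean) ** 2 for d in draws) / trials

print(mean)  # should be close to n = 5
print(var)   # should be close to 2n = 10
```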
Here is the main result that will allow us to construct confidence intervals for parameters of the normal distribution as in the Matlab example above.

Theorem. If X_1, ..., X_n are i.i.d. standard normal, then the sample mean X̄ and the sample variance X̄² − (X̄)² are independent,

√n X̄ ~ N(0, 1) and n(X̄² − (X̄)²) ~ χ²_{n−1},

i.e. √n X̄ has the standard normal distribution and n(X̄² − (X̄)²) has the χ²-distribution with (n − 1) degrees of freedom.
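Before proving the theorem, its claims can be sanity-checked by simulation (a Python sketch added for illustration; the seed, n = 6 and the trial count are arbitrary): √n X̄ should have mean 0 and variance 1, n(X̄² − (X̄)²) should have mean n − 1 (the mean of χ²_{n−1}), and the two statistics should be uncorrelated.

```python
import random

random.seed(1)

n, trials = 6, 100_000
a_vals, b_vals = [], []
for _ in range(trials):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(x) / n                        # sample mean
    x2bar = sum(xi * xi for xi in x) / n     # average of the squares
    a_vals.append(n ** 0.5 * xbar)           # sqrt(n) * Xbar, should be N(0, 1)
    b_vals.append(n * (x2bar - xbar ** 2))   # n * sample variance, should be chi^2_{n-1}

ma = sum(a_vals) / trials
mb = sum(b_vals) / trials
va = sum((a - ma) ** 2 for a in a_vals) / trials
cov = sum((a - ma) * (b - mb) for a, b in zip(a_vals, b_vals)) / trials

print(ma)   # close to 0
print(va)   # close to 1
print(mb)   # close to n - 1 = 5
print(cov)  # close to 0, consistent with independence
```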
Proof. Consider the vector Y obtained by a specific orthogonal transformation of X:

Y = (Y_1, ..., Y_n)ᵀ = V X.

Here we choose the first row of the matrix V to be

v_1 = (1/√n, ..., 1/√n),

and let the remaining rows be any vectors such that the matrix V defines an orthogonal transformation. This can be done since the length of the first row vector is |v_1| = 1, and we can simply choose the rows v_2, ..., v_n to be any orthonormal basis in the hyperplane orthogonal to the vector v_1.

Let us discuss some properties of this particular transformation. First of all, we showed above that Y_1, ..., Y_n are also i.i.d. standard normal. Because of the particular choice of the first row v_1 in V, the first coordinate is

Y_1 = (1/√n) X_1 + ... + (1/√n) X_n = √n X̄

and, therefore,

X̄ = Y_1/√n.  (5.0.1)

Next, n times the sample variance can be written as

n(X̄² − (X̄)²) = Σ X_i² − (1/n)(X_1 + ... + X_n)² = Σ X_i² − Y_1².

The orthogonal transformation V preserves the length of X, i.e. |Y| = |V X| = |X|, or

Y_1² + ... + Y_n² = X_1² + ... + X_n²,

and, therefore, we get

n(X̄² − (X̄)²) = Y_1² + ... + Y_n² − Y_1² = Y_2² + ... + Y_n².  (5.0.2)

Equations (5.0.1) and (5.0.2) show that the sample mean and the sample variance are independent, since Y_1 and (Y_2, ..., Y_n) are independent; √n X̄ = Y_1 has the standard normal distribution; and n(X̄² − (X̄)²) has the χ²_{n−1}-distribution, since Y_2, ..., Y_n are independent standard normal.

Let us write down the implications of this result for a general normal distribution: X_1, ..., X_n ~ N(μ, σ²).
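Equations (5.0.1) and (5.0.2) are exact algebraic identities, so they can be verified numerically for any fixed vector X (a Python sketch; the Gram-Schmidt helper, the choice n = 4 and the sample vector are all illustrative assumptions, not part of the notes).

```python
from math import sqrt

def gram_schmidt(rows):
    """Orthonormalize a list of linearly independent row vectors."""
    basis = []
    for v in rows:
        w = list(v)
        for b in basis:
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

n = 4
x = [0.3, -1.2, 2.5, 0.7]  # an arbitrary sample vector

# First row is (1/sqrt(n), ..., 1/sqrt(n)); complete it to an
# orthogonal matrix by feeding standard basis vectors through
# Gram-Schmidt.
first = [1.0 / sqrt(n)] * n
extras = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n - 1)]
V = gram_schmidt([first] + extras)

y = [sum(V[i][j] * x[j] for j in range(n)) for i in range(n)]

xbar = sum(x) / n
x2bar = sum(xi * xi for xi in x) / n

lhs_mean = sqrt(n) * xbar           # should equal Y_1            (5.0.1)
lhs_var = n * (x2bar - xbar ** 2)   # should equal Y_2^2+...+Y_n^2 (5.0.2)
rhs_var = sum(yi * yi for yi in y[1:])

print(lhs_mean, y[0])    # equal up to rounding
print(lhs_var, rhs_var)  # equal up to rounding
```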
In this case, we know that

Z_1 = (X_1 − μ)/σ, ..., Z_n = (X_n − μ)/σ ~ N(0, 1)

are independent standard normal. The Theorem applied to Z_1, ..., Z_n gives

√n Z̄ = √n · (1/n) Σ (X_i − μ)/σ = √n (X̄ − μ)/σ ~ N(0, 1)

and

n(Z̄² − (Z̄)²) = n[(1/n) Σ ((X_i − μ)/σ)² − ((1/n) Σ (X_i − μ)/σ)²] = n(X̄² − (X̄)²)/σ² ~ χ²_{n−1}.

We proved that the MLE μ̂ = X̄ and σ̂² = X̄² − (X̄)² are independent and

√n(μ̂ − μ)/σ ~ N(0, 1), n σ̂²/σ² ~ χ²_{n−1}.

Confidence intervals for parameters of the normal distribution.

We know that by the LLN the sample mean μ̂ and the sample variance σ̂² converge to the mean μ and the variance σ²:

μ̂ = X̄ → μ, σ̂² = X̄² − (X̄)² → σ².

In other words, these estimates are consistent. Based on the above description of the joint distribution of the estimates, we will give a precise quantitative description of how close μ̂ and σ̂² are to the unknown parameters μ and σ².

Let us start by giving a definition of a confidence interval in our usual setting, when we observe a sample X_1, ..., X_n with distribution P_{θ₀} from a parametric family {P_θ : θ ∈ Θ}, and θ₀ is unknown.

Definition. Given a confidence level parameter α ∈ [0, 1], if there exist two statistics

S_1 = S_1(X_1, ..., X_n) and S_2 = S_2(X_1, ..., X_n)

such that the probability

P_{θ₀}(S_1 ≤ θ₀ ≤ S_2) = α (or ≥ α),

then we will call [S_1, S_2] a confidence interval for the unknown parameter θ₀ with the confidence level α.
This definition means that we can guarantee with probability/confidence α that our unknown parameter lies within the interval [S_1, S_2]. We will now show how, in the case of a normal distribution N(μ, σ²), we can construct confidence intervals for the unknown μ and σ². Let us recall that in the last lecture we proved that if

X_1, ..., X_n are i.i.d. with distribution N(μ, σ²),

then

A = √n(μ̂ − μ)/σ ~ N(0, 1) and B = n σ̂²/σ² ~ χ²_{n−1},

and the random variables A and B are independent. If we recall the definition of the χ²-distribution, this means that we can represent A and B as

A = Y_1 and B = Y_2² + ... + Y_n²

for some i.i.d. standard normal Y_1, ..., Y_n.

Figure 5.2: p.d.f. of the χ²_{n−1}-distribution and the α-confidence interval; each tail beyond c_1 and c_2 has area (1 − α)/2.

First, let us consider the p.d.f. of the χ²_{n−1}-distribution (see figure 5.2) and choose points c_1 and c_2 so that the area in each tail is (1 − α)/2. Then the area between c_1 and c_2 is α, which means that

P(c_1 ≤ B ≤ c_2) = α.

Therefore, we can guarantee with probability α that

c_1 ≤ n σ̂²/σ² ≤ c_2.
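The statement P(c_1 ≤ B ≤ c_2) = α can be checked directly by simulation (a Python sketch; the seed, the true parameters, and the trial count are arbitrary illustrative choices, while c_1 = 2.7 and c_2 = 19.02 are the tabulated χ²_9 quantiles for n = 10 used later in this lecture): the pivot B = nσ̂²/σ² should land between c_1 and c_2 in about 95% of repeated samples.

```python
import random

random.seed(3)

n = 10
c1, c2 = 2.7, 19.02  # chi^2_9 quantiles, tail area 0.025 on each side
mu_true, sigma2_true = 0.5, 2.0
trials, inside = 20_000, 0

for _ in range(trials):
    x = [random.gauss(mu_true, sigma2_true ** 0.5) for _ in range(n)]
    xbar = sum(x) / n
    sigma2_hat = sum(xi * xi for xi in x) / n - xbar ** 2  # biased MLE variance
    b = n * sigma2_hat / sigma2_true   # pivot B, distributed as chi^2_{n-1}
    if c1 <= b <= c2:
        inside += 1

frac = inside / trials
print(frac)  # should be close to alpha = 0.95
```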
Solving this for σ² gives

n σ̂²/c_2 ≤ σ² ≤ n σ̂²/c_1.

This precisely means that the interval

[n σ̂²/c_2, n σ̂²/c_1]

is the α-confidence interval for the unknown variance σ². Next, let us construct the confidence interval for the mean μ. We will need the following definition.

Definition. If Y_0, Y_1, ..., Y_n are i.i.d. standard normal, then the distribution of the random variable

Y_0 / √((1/n)(Y_1² + ... + Y_n²))

is called the (Student) t_n-distribution with n degrees of freedom.

We will find the p.d.f. of this distribution in the following lectures, together with the p.d.f. of the χ²-distribution and some others. At this point we only note that this distribution does not depend on any parameters other than the degrees of freedom n and, therefore, it can be tabulated.

Consider the following expression:

A / √(B/(n − 1)) = Y_1 / √((1/(n − 1))(Y_2² + ... + Y_n²)) ~ t_{n−1},

which, by definition, has the t_{n−1}-distribution with (n − 1) degrees of freedom. On the other hand,

A / √(B/(n − 1)) = (√n(μ̂ − μ)/σ) / √(n σ̂²/(σ²(n − 1))) = √(n − 1) (μ̂ − μ)/σ̂.

If we now look at the p.d.f. of the t_{n−1}-distribution (see figure 5.3) and choose the constant c so that the area in each tail is (1 − α)/2 (the constant is the same on each side because the distribution is symmetric), we get that with probability α,

−c ≤ √(n − 1)(μ̂ − μ)/σ̂ ≤ c,

and solving this for μ, we get the confidence interval

μ̂ − c σ̂/√(n − 1) ≤ μ ≤ μ̂ + c σ̂/√(n − 1).

Example. (Textbook, Section 7.5.) Consider a sample of size n = 10 from a normal distribution with unknown parameters:

0.86, 1.53, 1.57, 1.81, 0.99, 1.09, 1.29, 1.78, 1.29, 1.58.
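The meaning of the confidence level for this interval can also be checked by simulation (a Python sketch; c = 2.262 is the tabulated t_9 quantile for n = 10, while the seed, the true parameters and the trial count are arbitrary illustrative choices): if we repeatedly draw samples of size n = 10 from a known normal distribution and build the interval μ̂ ± c σ̂/√(n − 1), the true mean should be covered in about 95% of repetitions.

```python
import random
from math import sqrt

random.seed(2)

n = 10
c = 2.262          # 0.975 quantile of the t-distribution with 9 d.o.f.
mu_true, sigma_true = 1.5, 0.4
trials, covered = 20_000, 0

for _ in range(trials):
    x = [random.gauss(mu_true, sigma_true) for _ in range(n)]
    mu_hat = sum(x) / n
    sigma2_hat = sum(xi * xi for xi in x) / n - mu_hat ** 2  # biased MLE variance
    half = c * sqrt(sigma2_hat) / sqrt(n - 1)
    if mu_hat - half <= mu_true <= mu_hat + half:
        covered += 1

coverage = covered / trials
print(coverage)  # should be close to 0.95
```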
Figure 5.3: p.d.f. of the t_{n−1}-distribution and the confidence interval for μ; each tail beyond ±c has area (1 − α)/2.

We compute the estimates

μ̂ = X̄ = 1.379 and σ̂² = X̄² − (X̄)² = 0.0966.

Let us choose the confidence level α = 95% = 0.95. We have to find c, c_1 and c_2 as explained above. Using the table for the t_9-distribution, we need to find c such that t_9((−∞, c]) = 0.975, which gives c = 2.262. To find c_1 and c_2 we have to use the χ²_9-distribution table:

χ²_9([0, c_1]) = 0.025 ⇒ c_1 = 2.7,
χ²_9([0, c_2]) = 0.975 ⇒ c_2 = 19.02.

Plugging these into the formulas above, with probability 95% we can guarantee that

X̄ − c √((X̄² − (X̄)²)/9) ≤ μ ≤ X̄ + c √((X̄² − (X̄)²)/9),

i.e. 1.1446 ≤ μ ≤ 1.6134, and with probability 95% we can guarantee that

n(X̄² − (X̄)²)/c_2 ≤ σ² ≤ n(X̄² − (X̄)²)/c_1,

i.e. 0.0508 ≤ σ² ≤ 0.3579.
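The arithmetic of this example can be reproduced in a few lines (a Python sketch of the same computation; the quantiles 2.262, 2.7 and 19.02 are the tabulated values used above):

```python
from math import sqrt

data = [0.86, 1.53, 1.57, 1.81, 0.99, 1.09, 1.29, 1.78, 1.29, 1.58]
n = len(data)

mu_hat = sum(data) / n
sigma2_hat = sum(x * x for x in data) / n - mu_hat ** 2  # biased MLE variance

c = 2.262            # t_9 quantile, tail area 0.025
c1, c2 = 2.7, 19.02  # chi^2_9 quantiles, tail areas 0.025 each

# Confidence interval for the mean mu.
half = c * sqrt(sigma2_hat) / sqrt(n - 1)
mu_lo, mu_hi = mu_hat - half, mu_hat + half

# Confidence interval for the variance sigma^2.
var_lo, var_hi = n * sigma2_hat / c2, n * sigma2_hat / c1

print(round(mu_hat, 3))      # 1.379
print(round(sigma2_hat, 4))  # 0.0966
print(round(mu_lo, 4), round(mu_hi, 4))    # 1.1446 1.6134
print(round(var_lo, 4), round(var_hi, 4))  # 0.0508 0.3579
```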
These confidence intervals may not look impressive, but the sample size here is very small, n = 10.

References.

[1] Shoemaker, Allen L. (1996), "What's Normal? - Temperature, Gender, and Heart Rate." Journal of Statistics Education, v. 4, n. 2.

[2] Mackowiak, P. A., Wasserman, S. S., and Levine, M. M. (1992), "A Critical Appraisal of 98.6 Degrees F, the Upper Limit of the Normal Body Temperature, and Other Legacies of Carl Reinhold August Wunderlich." Journal of the American Medical Association, 268.
More informationChapter 5 continued. Chapter 5 sections
Chapter 5 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions
More informationSTAT 135 Lab 2 Confidence Intervals, MLE and the Delta Method
STAT 135 Lab 2 Confidence Intervals, MLE and the Delta Method Rebecca Barter February 2, 2015 Confidence Intervals Confidence intervals What is a confidence interval? A confidence interval is calculated
More informationE X A M. Probability Theory and Stochastic Processes Date: December 13, 2016 Duration: 4 hours. Number of pages incl.
E X A M Course code: Course name: Number of pages incl. front page: 6 MA430-G Probability Theory and Stochastic Processes Date: December 13, 2016 Duration: 4 hours Resources allowed: Notes: Pocket calculator,
More informationNaive Bayes & Introduction to Gaussians
Naive Bayes & Introduction to Gaussians Andreas C. Kapourani 2 March 217 1 Naive Bayes classifier In the previous lab we illustrated how to use Bayes Theorem for pattern classification, which in practice
More informationMachine Learning, Fall 2012 Homework 2
0-60 Machine Learning, Fall 202 Homework 2 Instructors: Tom Mitchell, Ziv Bar-Joseph TA in charge: Selen Uguroglu email: sugurogl@cs.cmu.edu SOLUTIONS Naive Bayes, 20 points Problem. Basic concepts, 0
More informationPost-exam 2 practice questions 18.05, Spring 2014
Post-exam 2 practice questions 18.05, Spring 2014 Note: This is a set of practice problems for the material that came after exam 2. In preparing for the final you should use the previous review materials,
More informationLecture 2: Review of Probability
Lecture 2: Review of Probability Zheng Tian Contents 1 Random Variables and Probability Distributions 2 1.1 Defining probabilities and random variables..................... 2 1.2 Probability distributions................................
More informationTwo hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER.
Two hours MATH38181 To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER EXTREME VALUES AND FINANCIAL RISK Examiner: Answer any FOUR
More informationChapter 7. Confidence Sets Lecture 30: Pivotal quantities and confidence sets
Chapter 7. Confidence Sets Lecture 30: Pivotal quantities and confidence sets Confidence sets X: a sample from a population P P. θ = θ(p): a functional from P to Θ R k for a fixed integer k. C(X): a confidence
More informationEmpirical Likelihood Inference for Two-Sample Problems
Empirical Likelihood Inference for Two-Sample Problems by Ying Yan A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master of Mathematics in Statistics
More informationMath 475. Jimin Ding. August 29, Department of Mathematics Washington University in St. Louis jmding/math475/index.
istical A istic istics : istical Department of Mathematics Washington University in St. Louis www.math.wustl.edu/ jmding/math475/index.html August 29, 2013 istical August 29, 2013 1 / 18 istical A istic
More informationMathematical statistics
October 18 th, 2018 Lecture 16: Midterm review Countdown to mid-term exam: 7 days Week 1 Chapter 1: Probability review Week 2 Week 4 Week 7 Chapter 6: Statistics Chapter 7: Point Estimation Chapter 8:
More informationHANDBOOK OF APPLICABLE MATHEMATICS
HANDBOOK OF APPLICABLE MATHEMATICS Chief Editor: Walter Ledermann Volume VI: Statistics PART A Edited by Emlyn Lloyd University of Lancaster A Wiley-Interscience Publication JOHN WILEY & SONS Chichester
More information