Course information: Instructor: Tim Hanson, Leconte 219C, phone 777-3859. Office hours: Tuesday/Thursday 11-12, Wednesday 10-12, and by appointment. Text: Applied Linear Statistical Models (5th Edition), by Kutner, Nachtsheim, Neter, and Li. Online notes at http://www.stat.sc.edu/~hansont/stat704/stat704.html, based on David Hitchcock's notes and the text. Grading, et cetera: see syllabus. Stat 704 has a co-requisite of Stat 712 (Casella & Berger level mathematical statistics); you need to be taking it now or have taken it already.

Section A.3 Random Variables

Definition: A random variable is a function that maps an outcome from some random phenomenon to a real number. More formally, a random variable is a map or function from the sample space of an experiment, $S$, to some subset of the real numbers $R \subseteq \mathbb{R}$. Restated: a random variable measures the result of a random phenomenon.

Example 1: The height Y of a randomly selected University of South Carolina statistics graduate student.

Example 2: The number of car accidents Y in a month at the intersection of Assembly and Gervais.

Every random variable has a cumulative distribution function (cdf) associated with it: $F(y) = P(Y \le y)$.

Discrete random variables have a probability mass function (pmf) $f(y) = P(Y = y) = F(y) - F(y^-) = F(y) - \lim_{x \uparrow y} F(x)$.

Continuous random variables have a probability density function (pdf) such that for $a < b$, $P(a \le Y \le b) = \int_a^b f(y)\,dy$. For continuous random variables, $f(y) = F'(y)$.

Question: Are the two examples on the previous slide continuous or discrete?
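(Aside, not from the original slides.) A minimal Python sketch, assuming scipy is available, checking that $P(a \le Y \le b) = F(b) - F(a) = \int_a^b f(y)\,dy$ for a continuous random variable; the standard normal and the endpoints a = -1, b = 2 are arbitrary choices for the demo.

    # Check P(a <= Y <= b) = F(b) - F(a) = integral of f over [a, b]
    # for a continuous r.v.; here Y ~ N(0, 1), chosen only for illustration.
    from scipy import stats, integrate

    a, b = -1.0, 2.0
    via_cdf = stats.norm.cdf(b) - stats.norm.cdf(a)    # F(b) - F(a)
    via_pdf, _ = integrate.quad(stats.norm.pdf, a, b)  # numerically integrate the pdf
    print(via_cdf, via_pdf)                            # both about 0.8186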

Expected value (Casella & Berger 2.2, 2.3)

The expected value, or mean, of a random variable is, in general, defined as $E(Y) = \int y\, dF(y)$.

For discrete random variables this is $E(Y) = \sum_{y:\, f(y) > 0} y\, f(y)$.

For continuous random variables this is $E(Y) = \int_{-\infty}^{\infty} y\, f(y)\, dy$.
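(Aside, not from the original slides.) A quick numeric sketch of the two formulas above: the sum for a discrete pmf and the integral for a continuous pdf. The fair-die pmf and the Exponential(1) density are assumptions made only for this example.

    # E(Y) as a sum for a discrete r.v. (fair die) and as an integral
    # for a continuous r.v. (Exponential with rate 1, so the mean is 1).
    import numpy as np
    from scipy import integrate

    y = np.arange(1, 7)
    f = np.full(6, 1 / 6)                     # pmf of a fair die
    print((y * f).sum())                      # 3.5

    mean, _ = integrate.quad(lambda t: t * np.exp(-t), 0, np.inf)
    print(mean)                               # about 1.0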

Note: If a and c are constants, E(a + cY) = a + cE(Y). In particular, E(a) = a, E(cY) = cE(Y), and E(Y + a) = E(Y) + a.

Variance

The variance of a random variable measures the spread of its probability distribution. It is the expected squared deviation about the mean: $\mathrm{var}(Y) = E\{[Y - E(Y)]^2\}$. Equivalently, $\mathrm{var}(Y) = E(Y^2) - [E(Y)]^2$.

Note: If a and c are constants, $\mathrm{var}(a + cY) = c^2\,\mathrm{var}(Y)$. In particular, $\mathrm{var}(a) = 0$, $\mathrm{var}(cY) = c^2\,\mathrm{var}(Y)$, and $\mathrm{var}(Y + a) = \mathrm{var}(Y)$.

Note: The standard deviation of Y is $\mathrm{sd}(Y) = \sqrt{\mathrm{var}(Y)}$.

Example: Suppose Y is the high temperature in Celsius of a September day in Seattle. Say E(Y) = 20 and var(Y) = 10. Let W be the high temperature in Fahrenheit, $W = \tfrac{9}{5}Y + 32$. Then

$E(W) = E\left(\tfrac{9}{5}Y + 32\right) = \tfrac{9}{5}E(Y) + 32 = \tfrac{9}{5}(20) + 32 = 68$ degrees,

$\mathrm{var}(W) = \mathrm{var}\left(\tfrac{9}{5}Y + 32\right) = \left(\tfrac{9}{5}\right)^2 \mathrm{var}(Y) = 3.24(10) = 32.4$ degrees$^2$,

$\mathrm{sd}(W) = \sqrt{\mathrm{var}(W)} = \sqrt{32.4} \approx 5.7$ degrees.
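(Aside, not from the original slides.) A short Monte Carlo check of these numbers; only E(Y) = 20 and var(Y) = 10 are given above, so the normal distribution used to simulate Y is an arbitrary assumption for the demo.

    # Monte Carlo check of E(W), var(W), sd(W) for W = (9/5) Y + 32,
    # simulating Y with mean 20 and variance 10 (normality assumed just for the demo).
    import numpy as np

    rng = np.random.default_rng(704)
    y = rng.normal(loc=20, scale=np.sqrt(10), size=1_000_000)
    w = 9 / 5 * y + 32
    print(w.mean(), w.var(ddof=1), w.std(ddof=1))   # roughly 68, 32.4, 5.7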

Covariance (C & B Section 4.5)

For two random variables Y and Z, the covariance of Y and Z is $\mathrm{cov}(Y, Z) = E\{[Y - E(Y)][Z - E(Z)]\}$. Note $\mathrm{cov}(Y, Z) = E(YZ) - E(Y)E(Z)$.

If Y and Z have positive covariance, lower values of Y tend to correspond to lower values of Z (and large values of Y with large values of Z). Example: Y is work experience in years and Z is salary in Euros.

If Y and Z have negative covariance, lower values of Y tend to correspond to higher values of Z and vice versa. Example: Y is the weight of a car in tons and Z is miles per gallon.

If $a_1, c_1, a_2, c_2$ are constants, $\mathrm{cov}(a_1 + c_1 Y,\, a_2 + c_2 Z) = c_1 c_2\,\mathrm{cov}(Y, Z)$. Note: by definition $\mathrm{cov}(Y, Y) = \mathrm{var}(Y)$.

The correlation coefficient between Y and Z is the covariance scaled to be between $-1$ and $1$: $\mathrm{corr}(Y, Z) = \dfrac{\mathrm{cov}(Y, Z)}{\sqrt{\mathrm{var}(Y)\,\mathrm{var}(Z)}}$. If $\mathrm{corr}(Y, Z) = 0$ then Y and Z are uncorrelated.
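(Aside, not from the original slides.) A small simulation sketch illustrating the two identities above; the bivariate normal used to generate (Y, Z) and the constants $a_1, c_1, a_2, c_2$ are arbitrary assumptions for the demo.

    # Simulation check of cov(a1 + c1*Y, a2 + c2*Z) = c1*c2*cov(Y, Z)
    # and corr(Y, Z) = cov(Y, Z) / sqrt(var(Y) var(Z)).
    import numpy as np

    rng = np.random.default_rng(1)
    Sigma = [[2.0, 0.8], [0.8, 1.0]]                   # true covariance matrix of (Y, Z)
    y, z = rng.multivariate_normal([0, 0], Sigma, size=500_000).T

    a1, c1, a2, c2 = 3.0, 2.0, -1.0, 0.5
    lhs = np.cov(a1 + c1 * y, a2 + c2 * z)[0, 1]       # cov of the transformed variables
    rhs = c1 * c2 * np.cov(y, z)[0, 1]                 # c1*c2*cov(Y, Z)
    print(lhs, rhs)                                    # both near 0.8

    corr = np.cov(y, z)[0, 1] / np.sqrt(y.var(ddof=1) * z.var(ddof=1))
    print(corr, np.corrcoef(y, z)[0, 1])               # agree; near 0.8/sqrt(2)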

Independent random variables (C & B 4.2)

Informally, two random variables Y and Z are independent if knowing the value of one random variable does not affect the probability distribution of the other.

Note: If Y and Z are independent, then Y and Z are uncorrelated: $\mathrm{corr}(Y, Z) = 0$. However, $\mathrm{corr}(Y, Z) = 0$ does not imply independence in general. If Y and Z have a bivariate normal distribution, then $\mathrm{cov}(Y, Z) = 0 \iff Y, Z$ independent.

Question: what is the formal definition of independence for (Y, Z)?

Linear combinations of random variables

Suppose $Y_1, Y_2, \ldots, Y_n$ are random variables and $a_1, a_2, \ldots, a_n$ are constants. Then

$E\left[\sum_{i=1}^n a_i Y_i\right] = \sum_{i=1}^n a_i E(Y_i)$.

That is, $E[a_1 Y_1 + a_2 Y_2 + \cdots + a_n Y_n] = a_1 E(Y_1) + a_2 E(Y_2) + \cdots + a_n E(Y_n)$.

Also,

$\mathrm{var}\left[\sum_{i=1}^n a_i Y_i\right] = \sum_{i=1}^n \sum_{j=1}^n a_i a_j\,\mathrm{cov}(Y_i, Y_j)$.

For two random variables,

$E(a_1 Y_1 + a_2 Y_2) = a_1 E(Y_1) + a_2 E(Y_2)$,
$\mathrm{var}(a_1 Y_1 + a_2 Y_2) = a_1^2\,\mathrm{var}(Y_1) + a_2^2\,\mathrm{var}(Y_2) + 2 a_1 a_2\,\mathrm{cov}(Y_1, Y_2)$.

Note: if $Y_1, \ldots, Y_n$ are all independent (or even just uncorrelated), then

$\mathrm{var}\left[\sum_{i=1}^n a_i Y_i\right] = \sum_{i=1}^n a_i^2\,\mathrm{var}(Y_i)$.

Also, if $Y_1, \ldots, Y_n$ are all independent, then

$\mathrm{cov}\left(\sum_{i=1}^n a_i Y_i,\, \sum_{i=1}^n c_i Y_i\right) = \sum_{i=1}^n a_i c_i\,\mathrm{var}(Y_i)$.
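(Aside, not from the original slides.) The double-sum variance formula is the quadratic form $a^{\mathsf T} \Sigma\, a$, where $\Sigma$ is the covariance matrix of $(Y_1, \ldots, Y_n)$. A minimal Python sketch checking it against simulation; the covariance matrix and coefficients are arbitrary assumptions for the demo.

    # var(sum_i a_i Y_i) = sum_i sum_j a_i a_j cov(Y_i, Y_j) = a' Sigma a
    import numpy as np

    rng = np.random.default_rng(2)
    a = np.array([1.0, -2.0, 0.5])
    Sigma = np.array([[1.0, 0.3, 0.0],
                      [0.3, 2.0, -0.4],
                      [0.0, -0.4, 1.5]])               # assumed cov matrix of (Y1, Y2, Y3)

    theory = a @ Sigma @ a                             # the double sum as a quadratic form
    ysim = rng.multivariate_normal(np.zeros(3), Sigma, size=500_000)
    sim = (ysim @ a).var(ddof=1)                       # Monte Carlo variance of the combination
    print(theory, sim)                                 # both near 8.975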

Important example: Suppose $Y_1, \ldots, Y_n$ are independent random variables, each with mean $\mu$ and variance $\sigma^2$. Define the sample mean as $\bar{Y} = \frac{1}{n}\sum_{i=1}^n Y_i$. Then

$E(\bar{Y}) = E\left(\tfrac{1}{n}Y_1 + \cdots + \tfrac{1}{n}Y_n\right) = \tfrac{1}{n}E(Y_1) + \cdots + \tfrac{1}{n}E(Y_n) = \tfrac{1}{n}\mu + \cdots + \tfrac{1}{n}\mu = n\left(\tfrac{1}{n}\right)\mu = \mu$.

$\mathrm{var}(\bar{Y}) = \mathrm{var}\left(\tfrac{1}{n}Y_1 + \cdots + \tfrac{1}{n}Y_n\right) = \tfrac{1}{n^2}\mathrm{var}(Y_1) + \cdots + \tfrac{1}{n^2}\mathrm{var}(Y_n) = n\left(\tfrac{1}{n^2}\right)\sigma^2 = \dfrac{\sigma^2}{n}$. (C & B pp. 212-214)

The Central Limit Theorem takes this a step further. When $Y_1, \ldots, Y_n$ are independent and identically distributed (i.e. a random sample) from any distribution such that $E(Y_i) = \mu$ and $\mathrm{var}(Y_i) = \sigma^2$, and n is reasonably large,

$\bar{Y} \,\dot{\sim}\, N\!\left(\mu, \dfrac{\sigma^2}{n}\right)$,

where $\dot{\sim}$ is read as 'approximately distributed as.' Note that $E(\bar{Y}) = \mu$ and $\mathrm{var}(\bar{Y}) = \frac{\sigma^2}{n}$ as on the previous slide. The CLT slaps normality onto $\bar{Y}$.

Formally, the CLT states $\sqrt{n}(\bar{Y} - \mu) \xrightarrow{D} N(0, \sigma^2)$. (C & B pp. 236-240)
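(Aside, not from the original slides.) A quick simulation sketch of the CLT for sample means of a skewed distribution; the Exponential(1) population ($\mu = 1$, $\sigma^2 = 1$) and n = 30 are arbitrary choices for the demo.

    # CLT demo: means of n = 30 iid Exponential(1) draws should be
    # approximately N(mu, sigma^2 / n) = N(1, 1/30).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n, reps = 30, 200_000
    ybar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

    print(ybar.mean(), ybar.var(ddof=1))       # near mu = 1 and sigma^2/n = 1/30
    # compare a tail probability with its normal approximation
    print((ybar > 1.3).mean(), 1 - stats.norm.cdf(1.3, loc=1, scale=np.sqrt(1 / 30)))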

Section A.4 Gaussian & related distributions

Normal distribution (C & B pp. 102-106)

A random variable Y has a normal distribution with mean $\mu$ and standard deviation $\sigma$, denoted $Y \sim N(\mu, \sigma^2)$, if it has the pdf

$f(y) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\dfrac{1}{2}\left(\dfrac{y - \mu}{\sigma}\right)^2\right\}$, for $-\infty < y < \infty$.

Here, $\mu \in \mathbb{R}$ and $\sigma > 0$.

Note: If $Y \sim N(\mu, \sigma^2)$, then $Z = \dfrac{Y - \mu}{\sigma} \sim N(0, 1)$ is said to have a standard normal distribution.

Note: If a and c are constants and $Y \sim N(\mu, \sigma^2)$, then $a + cY \sim N(a + c\mu,\, c^2\sigma^2)$.

Note: If $Y_1, \ldots, Y_n$ are independent normal such that $Y_i \sim N(\mu_i, \sigma_i^2)$ and $a_1, \ldots, a_n$ are constants, then

$\sum_{i=1}^n a_i Y_i = a_1 Y_1 + \cdots + a_n Y_n \sim N\!\left(\sum_{i=1}^n a_i \mu_i,\; \sum_{i=1}^n a_i^2 \sigma_i^2\right)$.

Example: Suppose $Y_1, \ldots, Y_n$ are iid from $N(\mu, \sigma^2)$. Then $\bar{Y} \sim N\!\left(\mu, \dfrac{\sigma^2}{n}\right)$. (C & B p. 215)
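(Aside, not from the original slides.) A simulation sketch of the linear-combination result for two independent normals; the means, variances, and coefficients below are arbitrary assumptions for the demo.

    # a1*Y1 + a2*Y2 for independent normals should be
    # N(a1*mu1 + a2*mu2, a1^2*sig1^2 + a2^2*sig2^2).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    a1, a2 = 2.0, -1.0
    y1 = rng.normal(1.0, 2.0, size=400_000)            # N(1, 4)
    y2 = rng.normal(3.0, 1.0, size=400_000)            # N(3, 1)
    lc = a1 * y1 + a2 * y2

    mu = a1 * 1.0 + a2 * 3.0                           # = -1
    var = a1**2 * 4.0 + a2**2 * 1.0                    # = 17
    print(lc.mean(), lc.var(ddof=1))                   # near -1 and 17
    print((lc < 0).mean(), stats.norm.cdf(0, mu, np.sqrt(var)))   # both near 0.596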

Distributions related to normal sampling (C & B 5.3)

Chi-square distribution

Definition: If $Z_1, \ldots, Z_\nu \stackrel{\mathrm{iid}}{\sim} N(0, 1)$, then $X = Z_1^2 + \cdots + Z_\nu^2 \sim \chi^2_\nu$, chi-square with $\nu$ degrees of freedom.

Note: $E(X) = \nu$ and $\mathrm{var}(X) = 2\nu$.

[Plot of the $\chi^2_1$, $\chi^2_2$, $\chi^2_3$, $\chi^2_4$ pdfs.]
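(Aside, not from the original slides.) A simulation sketch of the chi-square definition above; $\nu = 4$ is an arbitrary choice for the demo.

    # Sum of nu squared N(0,1) draws behaves like chi-square(nu),
    # with mean nu and variance 2*nu.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    nu = 4
    x = (rng.normal(size=(300_000, nu)) ** 2).sum(axis=1)
    print(x.mean(), x.var(ddof=1))                         # near 4 and 8
    print(np.quantile(x, 0.95), stats.chi2.ppf(0.95, nu))  # both near 9.49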

t distribution

Definition: If $Z \sim N(0, 1)$ is independent of $X \sim \chi^2_\nu$, then $T = \dfrac{Z}{\sqrt{X/\nu}} \sim t_\nu$, t with $\nu$ degrees of freedom.

Note that $E(T) = 0$ for $\nu \ge 2$ and $\mathrm{var}(T) = \dfrac{\nu}{\nu - 2}$ for $\nu \ge 3$.

[Plot of the $t_1$, $t_2$, $t_3$, $t_4$ pdfs.]

F distribution

Definition: If $X_1 \sim \chi^2_{\nu_1}$ is independent of $X_2 \sim \chi^2_{\nu_2}$, then $F = \dfrac{X_1/\nu_1}{X_2/\nu_2} \sim F_{\nu_1, \nu_2}$, F with $\nu_1$ degrees of freedom in the numerator and $\nu_2$ degrees of freedom in the denominator.

Note: The square of a $t_\nu$ random variable is an $F_{1,\nu}$ random variable. Proof:

$t_\nu^2 = \left[\dfrac{Z}{\sqrt{\chi^2_\nu/\nu}}\right]^2 = \dfrac{Z^2}{\chi^2_\nu/\nu} = \dfrac{\chi^2_1/1}{\chi^2_\nu/\nu} = F_{1,\nu}$.

Note: $E(F) = \nu_2/(\nu_2 - 2)$ for $\nu_2 > 2$. The variance is a function of $\nu_1$ and $\nu_2$ and a bit more complicated.

Question: If $F \sim F(\nu_1, \nu_2)$, what is $F^{-1}$ distributed as?
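(Aside, not from the original slides.) A two-line numeric check of the $t_\nu^2 = F_{1,\nu}$ relationship via quantiles, with $\nu = 10$ chosen arbitrarily: the upper 5% point of $F_{1,\nu}$ equals the square of the upper 2.5% point of $t_\nu$.

    # t^2 = F(1, nu): the 0.95 quantile of F(1, nu) equals the
    # square of the 0.975 quantile of t(nu).
    from scipy import stats

    nu = 10
    print(stats.f.ppf(0.95, 1, nu))          # about 4.96
    print(stats.t.ppf(0.975, nu) ** 2)       # same value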

[Plot of the $F_{2,2}$, $F_{5,5}$, $F_{5,20}$, $F_{5,200}$ pdfs.]

Section A.6 normal population inference

A model for a single sample

Suppose we have a random sample $Y_1, \ldots, Y_n$ of observations from a normal distribution with unknown mean $\mu$ and unknown variance $\sigma^2$. We can model these data as $Y_i = \mu + \epsilon_i$, $i = 1, \ldots, n$, where $\epsilon_i \sim N(0, \sigma^2)$. Often we wish to obtain inference for the unknown population mean $\mu$, e.g. a confidence interval for $\mu$ or a hypothesis test of $H_0: \mu = \mu_0$.

Let $s^2 = \dfrac{1}{n-1}\sum_{i=1}^n (Y_i - \bar{Y})^2$ be the sample variance and $s = \sqrt{s^2}$ be the sample standard deviation.

Fact: $\dfrac{(n-1)s^2}{\sigma^2} = \dfrac{1}{\sigma^2}\sum_{i=1}^n (Y_i - \bar{Y})^2$ has a $\chi^2_{n-1}$ distribution (easy to show using results from linear models).

Fact: $\dfrac{\bar{Y} - \mu}{\sigma/\sqrt{n}}$ has a $N(0, 1)$ distribution.

Fact: $\bar{Y}$ is independent of $s^2$, so any function of $\bar{Y}$ is independent of any function of $s^2$. Therefore

$\dfrac{\dfrac{\bar{Y} - \mu}{\sigma/\sqrt{n}}}{\sqrt{\dfrac{1}{\sigma^2}\sum_{i=1}^n (Y_i - \bar{Y})^2 \Big/ (n-1)}} = \dfrac{\bar{Y} - \mu}{s/\sqrt{n}} \sim t_{n-1}$. (C & B Theorem 5.3.1, p. 218)
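(Aside, not from the original slides.) A simulation sketch of the last fact: the studentized mean follows a $t_{n-1}$ distribution for normal data. The values $\mu = 5$, $\sigma = 2$, and n = 8 are arbitrary choices for the demo.

    # Check that (Ybar - mu) / (s / sqrt(n)) follows t_{n-1} for normal samples.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    mu, sigma, n, reps = 5.0, 2.0, 8, 200_000
    y = rng.normal(mu, sigma, size=(reps, n))
    tstat = (y.mean(axis=1) - mu) / (y.std(axis=1, ddof=1) / np.sqrt(n))

    print(np.quantile(tstat, [0.025, 0.975]))    # near -2.365 and 2.365
    print(stats.t.ppf([0.025, 0.975], n - 1))    # t_7 quantiles: -2.365 and 2.365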

Let $0 < \alpha < 1$, typically $\alpha = 0.05$. Let $t_{n-1}(1-\alpha/2)$ be such that $P(T \le t_{n-1}(1-\alpha/2)) = 1 - \alpha/2$ for $T \sim t_{n-1}$.

[Plot: $t_{n-1}$ density and quantiles, with area $1-\alpha$ between $-t_{n-1}(1-\alpha/2)$ and $t_{n-1}(1-\alpha/2)$ and area $\alpha/2$ in each tail.]

Under the model $Y_i = \mu + \epsilon_i$, $i = 1, \ldots, n$, where $\epsilon_i \sim N(0, \sigma^2)$,

$1 - \alpha = P\left(-t_{n-1}(1-\alpha/2) \le \dfrac{\bar{Y} - \mu}{s/\sqrt{n}} \le t_{n-1}(1-\alpha/2)\right)$

$= P\left(-\dfrac{s}{\sqrt{n}}\, t_{n-1}(1-\alpha/2) \le \bar{Y} - \mu \le \dfrac{s}{\sqrt{n}}\, t_{n-1}(1-\alpha/2)\right)$

$= P\left(\bar{Y} - \dfrac{s}{\sqrt{n}}\, t_{n-1}(1-\alpha/2) \le \mu \le \bar{Y} + \dfrac{s}{\sqrt{n}}\, t_{n-1}(1-\alpha/2)\right)$.

So a $(1-\alpha)100\%$ random probability interval for $\mu$ is

$\bar{Y} \pm t_{n-1}(1-\alpha/2)\, \dfrac{s}{\sqrt{n}}$,

where $t_{n-1}(1-\alpha/2)$ is the $(1-\alpha/2)$th quantile of a $t_{n-1}$ random variable, i.e. the value such that $P(T < t_{n-1}(1-\alpha/2)) = 1 - \alpha/2$ where $T \sim t_{n-1}$. This, of course, turns into a confidence interval after $\bar{Y} = \bar{y}$ and $s^2$ are observed and are no longer random.
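(Aside, not from the original slides.) A minimal sketch computing the interval $\bar{y} \pm t_{n-1}(1-\alpha/2)\, s/\sqrt{n}$ from observed data; the simulated sample (n = 15 from N(10, 9)) is an arbitrary stand-in for real data.

    # 95% t confidence interval: ybar +/- t_{n-1}(0.975) * s / sqrt(n).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    y = rng.normal(loc=10, scale=3, size=15)        # stand-in data

    n, alpha = len(y), 0.05
    ybar, s = y.mean(), y.std(ddof=1)
    tcrit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    half = tcrit * s / np.sqrt(n)
    print(ybar - half, ybar + half)                 # interval that should cover mu = 10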