Introduction to Simple Linear Regression
1 Introduction to Simple Linear Regression
Yang Feng (Columbia University)
2 About me
Faculty in the Department of Statistics, Columbia University. PhD in Operations Research & Financial Engineering, Princeton University, 2010. Taught the course over 5 times.
My research interests:
- High-dimensional statistical learning
- Network models
- Nonparametric and semiparametric methods
- Bioinformatics
See my website for more details.
3 Course Description
Theory and practice of regression analysis: simple and multiple regression, including testing, estimation, and confidence procedures, modeling, regression diagnostics and plots, polynomial regression, collinearity and confounding, model selection, and the geometry of least squares. Extensive use of the computer to analyze data.
Course website:
Required text: Applied Linear Regression Models (4th Ed.), by Kutner, Nachtsheim, and Neter.
4 Course Outline
The first half of the course is single-variable linear regression:
- Least squares
- Maximum likelihood, normal model
- Tests / inferences
- ANOVA
- Diagnostics
- Remedial measures
5 Course Outline (Continued)
The second half of the course is multiple linear regression and other related topics:
- Multiple linear regression
- Linear algebra review
- Matrix approach to linear regression
- Multiple predictor variables
- Diagnostics
- Tests
- Model selection
Other topics (if time permits):
- Principal Component Analysis
- Generalized Linear Models
- Introduction to Bayesian Inference
6 Requirements
- Calculus: derivatives, gradients, convexity
- Linear algebra: matrix notation, inversion, eigenvectors, eigenvalues, rank
- Probability and statistics: random variables; expectation and variance; estimation; bias/variance; basic probability distributions; hypothesis testing; confidence intervals
7 Software
R will be used throughout the course and is required in all homework. An R tutorial session will be given (bring your laptop with R and RStudio installed!).
Reasons for R:
- Completely free software. Can be downloaded from
- RStudio is an easy-to-use IDE.
- Available on various systems: PC, Mac, Linux, and even iPhone and iPad!
- Advanced yet easy to use.
References: An Introduction to R; R for Data Science.
9 Why regression?
We want to model a functional relationship between a predictor variable (input, independent variable, etc.) and a response variable (output, dependent variable, etc.). Examples?
But the real world is noisy; there is no exact f = ma:
- Observation noise
- Process noise
Two distinct goals:
- (Estimation) Understanding the relationship between predictor variables and response variables.
- (Prediction) Predicting the future response given newly observed predictors.
10 History
Sir Francis Galton, 19th century, studied the relation between heights of parents and children and noted that the children "regressed" toward the population mean. "Regression" stuck as the term to describe statistical relations between variables.
11 Example Applications
Okun's law: the gap version states that for every 1% increase in the unemployment rate, a country's GDP will be roughly an additional 2% lower than its potential GDP.
12 Others
- Epidemiology: relating lifespan to obesity, smoking habits, etc.
- Science and engineering: relating physical inputs to physical outputs in complex systems.
- Education and income: relating income to the length of education.
14 Aims for the course
Given something you would like to predict (the response) and the available explanatory variables:
- What kind of model should you use?
- Which variables should you include?
- Which transformations of variables and interaction terms should you use?
Given a model and some data:
- How do you fit the model to the data?
- How do you express confidence in the values of the model parameters?
- How do you regularize the model to avoid over-fitting and other related issues?
17 Data for Regression Analysis
Observational data
- Example: the relation between age of employee (X) and number of days of illness last year (Y). Cannot be controlled!
Experimental data
- Example: an insurance company wishes to study the relation between the productivity of its analysts in processing claims (Y) and the length of training (X).
- Treatment: the length of training.
- Experimental units: the analysts included in the study.
Completely randomized design: the most basic type of statistical design.
- Example: the same example, but every experimental unit has an equal chance to receive any one of the treatments.
18 Questions? Good time to ask now.
19 Simple Linear Regression
We want to find parameters for a function of the form
Y_i = β_0 + β_1 X_i + ɛ_i
The distribution of the error random variable is not specified.
20 Quick Example - Scatter Plot
(scatter plot figure)
21 Formal Statement of Model
Y_i = β_0 + β_1 X_i + ɛ_i
- Y_i is the value of the response variable in the i-th trial
- β_0 and β_1 are parameters
- X_i is a known constant, the value of the predictor variable in the i-th trial
- ɛ_i is a random error term with mean E(ɛ_i) = 0 and variance Var(ɛ_i) = σ²
- ɛ_i and ɛ_j are uncorrelated
- i = 1, ..., n
22 Properties
The response Y_i is the sum of two components:
- Constant term β_0 + β_1 X_i
- Random term ɛ_i
The expected response is
E(Y_i) = E(β_0 + β_1 X_i + ɛ_i) = β_0 + β_1 X_i + E(ɛ_i) = β_0 + β_1 X_i
23 Expectation Review
Suppose X has probability density function f(x).
Definition: E(X) = ∫ x f(x) dx.
Linearity property: E(aX) = a E(X) and E(aX + bY) = a E(X) + b E(Y). Obvious from the definition.
24 Example Expectation Derivation
Suppose the p.d.f. of X is f(x) = 2x, 0 ≤ x ≤ 1.
Expectation:
E(X) = ∫ x f(x) dx = ∫_0^1 2x² dx = [2x³/3]_0^1 = 2/3
25 Expectation of a Product of Random Variables
If X, Y are random variables with joint density function f(x, y), then the expectation of the product is given by
E(XY) = ∫∫ x y f(x, y) dx dy.
26 Expectation of a Product of Random Variables
What if X and Y are independent? If X and Y are independent with density functions f_1 and f_2 respectively, then
E(XY) = ∫∫ x y f_1(x) f_2(y) dx dy
= ∫ x f_1(x) [∫ y f_2(y) dy] dx
= ∫ x f_1(x) E(Y) dx
= E(X) E(Y)
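The product rule for independent variables is easy to check by simulation. A minimal sketch: the distributions below (Uniform(0, 1) and Uniform(0, 2)) and the sample size are arbitrary illustration choices.

```python
import random

# Monte Carlo check of E(XY) = E(X)·E(Y) for independent X and Y.
# X ~ Uniform(0, 1) and Y ~ Uniform(0, 2) are arbitrary choices.
random.seed(0)
N = 200000
xs = [random.random() for _ in range(N)]
ys = [2 * random.random() for _ in range(N)]

e_xy = sum(x * y for x, y in zip(xs, ys)) / N
e_x = sum(xs) / N
e_y = sum(ys) / N

print(round(e_xy, 2), round(e_x * e_y, 2))   # both ≈ 0.5
```

Here E(X) = 0.5 and E(Y) = 1, so both the direct estimate of E(XY) and the product E(X)·E(Y) come out near 0.5.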
27 Regression Function
The response Y_i comes from a probability distribution with mean
E(Y_i) = β_0 + β_1 X_i
This means the regression function is
E(Y) = β_0 + β_1 X,
since the regression function relates the means of the probability distributions of Y for a given X to the level of X.
28 Error Terms
The response Y_i in the i-th trial exceeds or falls short of the value of the regression function by the error-term amount ɛ_i. The error terms ɛ_i are assumed to have constant variance σ².
29 Response Variance
The responses Y_i have the same constant variance:
Var(Y_i) = Var(β_0 + β_1 X_i + ɛ_i) = Var(ɛ_i) = σ²
30 Variance (2nd Central Moment) Review
Continuous distribution:
Var(X) = E[(X − E(X))²] = ∫ (x − E(X))² f(x) dx
Discrete distribution:
Var(X) = E[(X − E(X))²] = Σ_i (x_i − E(X))² P(X = x_i)
Alternative form for the variance: Var(X) = E(X²) − E(X)².
31 Example Variance Derivation
f(x) = 2x, 0 ≤ x ≤ 1
Var(X) = E(X²) − E(X)² = ∫_0^1 x² · 2x dx − (2/3)² = 1/2 − 4/9 = 1/18
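The moment calculations for f(x) = 2x can be confirmed numerically. A small sketch using a plain midpoint Riemann sum (the grid size is an arbitrary choice):

```python
# Numerical check of E(X) = 2/3 and Var(X) = 1/18 for f(x) = 2x on [0, 1],
# using a midpoint Riemann sum.
N = 100000
dx = 1.0 / N
mids = [(i + 0.5) * dx for i in range(N)]

e_x = sum(x * 2 * x * dx for x in mids)        # ∫ x f(x) dx
e_x2 = sum(x * x * 2 * x * dx for x in mids)   # ∫ x² f(x) dx
var = e_x2 - e_x ** 2                          # E(X²) − E(X)²

print(round(e_x, 4), round(var, 4))   # ≈ 0.6667 and 0.0556
```

The printed values match 2/3 ≈ 0.6667 and 1/18 ≈ 0.0556 from the derivation.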
32 Variance Properties
- Var(aX) = a² Var(X)
- Var(aX + bY) = a² Var(X) + b² Var(Y) if X ⊥ Y
- Var(a + cX) = c² Var(X) if a and c are both constants
More generally,
Var(Σ_i a_i X_i) = Σ_i Σ_j a_i a_j Cov(X_i, X_j)
33 Covariance
The covariance between two real-valued random variables X and Y, with expected values E(X) = µ and E(Y) = ν, is defined as
Cov(X, Y) = E((X − µ)(Y − ν)) = E(XY) − µν.
If X ⊥ Y, then E(XY) = E(X) E(Y) = µν, so Cov(X, Y) = 0.
34 Formal Statement of Model
Y_i = β_0 + β_1 X_i + ɛ_i
- Y_i is the value of the response variable in the i-th trial
- β_0 and β_1 are parameters
- X_i is a known constant, the value of the predictor variable in the i-th trial
- ɛ_i is a random error term with mean E(ɛ_i) = 0 and variance Var(ɛ_i) = σ²
- i = 1, ..., n
35 Example
An experimenter gave three subjects a very difficult task. Data on the age of the subject (X) and on the number of attempts to accomplish the task before giving up (Y) follow:
Table: Subject i | Age X_i | Number of Attempts Y_i
38 Least Squares Linear Regression
Goal: make Y_i and b_0 + b_1 X_i close for all i.
Proposal 1: minimize Σ_{i=1}^n [Y_i − (b_0 + b_1 X_i)].
Proposal 2: minimize Σ_{i=1}^n |Y_i − (b_0 + b_1 X_i)|.
Final proposal: minimize
Q(b_0, b_1) = Σ_{i=1}^n [Y_i − (b_0 + b_1 X_i)]²
Choose b_0 and b_1 as estimators for β_0 and β_1; b_0 and b_1 minimize the criterion Q for the given sample observations (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n).
39 Guess #1
40 Guess #2
41 Function Maximization
An important technique to remember!
- Take the derivative.
- Set the result equal to zero and solve.
- Test the second derivative at that point.
Question: does this always give you the maximum?
Going further: multiple variables, convex optimization.
43 Function Maximization
Find the value of x that maximizes the function f(x) = −x² + log(x), x > 0.
1. Take the derivative and set it to zero:
d/dx (−x² + log(x)) = 0  (1)
−2x + 1/x = 0  (2)
2x² = 1  (3)
x² = 1/2  (4)
x = √2/2  (5)
44 2. Check the second-order derivative:
d²/dx² (−x² + log(x)) = d/dx (−2x + 1/x)  (6)
= −2 − 1/x²  (7)
< 0  (8)
Then we have found the maximum: x* = √2/2, f(x*) = −(1/2)[1 + log(2)].
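The calculus above can be sanity-checked numerically. A minimal sketch using a brute-force grid search over x > 0 (the grid range and step are arbitrary choices, and log means the natural log):

```python
import math

# f(x) = -x^2 + log(x) from the worked example (natural log).
f = lambda x: -x**2 + math.log(x)

# Grid search over x > 0 as a sanity check on the calculus.
xs = [i / 100000 for i in range(1, 300000)]   # 0.00001 .. 3.0
x_best = max(xs, key=f)

print(round(x_best, 3))    # ≈ 0.707 = √2/2
print(round(f(x_best), 3)) # ≈ -0.847 = -(1 + log 2)/2
```

The grid maximizer agrees with the analytic answer x* = √2/2 ≈ 0.707.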
45 Least Squares Minimization
Function to minimize w.r.t. b_0 and b_1 (b_0 and b_1 are called point estimators of β_0 and β_1, respectively):
Q(b_0, b_1) = Σ_{i=1}^n [Y_i − (b_0 + b_1 X_i)]²
Find the partial derivatives and set both equal to zero:
∂Q/∂b_0 = 0
∂Q/∂b_1 = 0
46 Normal Equations
The results of this minimization step are called the normal equations; b_0 and b_1 are called point estimators of β_0 and β_1, respectively:
Σ Y_i = n b_0 + b_1 Σ X_i  (9)
Σ X_i Y_i = b_0 Σ X_i + b_1 Σ X_i²  (10)
This is a system of two equations in two unknowns.
47 Solution to the Normal Equations
After a lot of algebra one arrives at
b_1 = Σ (X_i − X̄)(Y_i − Ȳ) / Σ (X_i − X̄)²
b_0 = Ȳ − b_1 X̄
where X̄ = Σ X_i / n and Ȳ = Σ Y_i / n.
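The closed-form solution is short enough to implement directly. A minimal sketch on simulated data (the course uses R; this is a plain Python illustration, and the true coefficients and noise level below are made-up values):

```python
import random

# Simulate data from Y = β0 + β1 X + ε and fit by the closed-form
# least squares solution. β0 = 2, β1 = 0.5, σ = 0.1 are illustration values.
random.seed(0)
beta0, beta1, n = 2.0, 0.5, 1000
X = [random.uniform(0, 10) for _ in range(n)]
Y = [beta0 + beta1 * x + random.gauss(0, 0.1) for x in X]

x_bar = sum(X) / n
y_bar = sum(Y) / n

# b1 = Σ(Xi − X̄)(Yi − Ȳ) / Σ(Xi − X̄)²,  b0 = Ȳ − b1·X̄
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y)) / \
     sum((x - x_bar) ** 2 for x in X)
b0 = y_bar - b1 * x_bar

print(round(b0, 2), round(b1, 2))   # close to the true (2.0, 0.5)
```

With a large sample and small noise, the estimates land very close to the true coefficients.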
48 Least Square Fit
49 Properties of the Solution
The i-th residual is defined to be e_i = Y_i − Ŷ_i.
The sum of the residuals is zero, from (9):
Σ e_i = Σ (Y_i − b_0 − b_1 X_i) = Σ Y_i − n b_0 − b_1 Σ X_i = 0
The sum of the observed values Y_i equals the sum of the fitted values Ŷ_i:
Σ Ŷ_i = Σ Y_i
50 Properties of the Solution
The sum of the weighted residuals is zero when the residual in the i-th trial is weighted by the level of the predictor variable in the i-th trial, from (10):
Σ X_i e_i = Σ X_i (Y_i − b_0 − b_1 X_i) = Σ X_i Y_i − b_0 Σ X_i − b_1 Σ X_i² = 0
51 Properties of the Solution
The sum of the weighted residuals is zero when the residual in the i-th trial is weighted by the fitted value of the response variable in the i-th trial:
Σ Ŷ_i e_i = Σ (b_0 + b_1 X_i) e_i = b_0 Σ e_i + b_1 Σ X_i e_i = 0
52 Properties of the Solution
The regression line always goes through the point (X̄, Ȳ).
Using the alternative form of the linear regression model
Y_i = β_0* + β_1 (X_i − X̄) + ɛ_i,
the least squares estimator b_1 for β_1 remains the same as before, while the least squares estimator for β_0* = β_0 + β_1 X̄ becomes
b_0* = b_0 + b_1 X̄ = (Ȳ − b_1 X̄) + b_1 X̄ = Ȳ
Hence the estimated regression function is
Ŷ = Ȳ + b_1 (X − X̄)
Evidently, the regression line always goes through the point (X̄, Ȳ).
53 Estimating the Error Term Variance σ²
- Review estimation in a non-regression setting.
- Show estimation results for the regression setting.
54 Estimation Review
An estimator is a rule that tells how to calculate the value of an estimate based on the measurements contained in a sample, e.g. the sample mean Ȳ = (1/n) Σ_{i=1}^n Y_i.
55 Point Estimators and Bias
Point estimator: θ̂ = f({Y_1, ..., Y_n})
Unknown quantity / parameter: θ
Definition (bias of an estimator): Bias(θ̂) = E(θ̂) − θ
56 Example
Sample from a normal population to estimate the population mean:
Y_i ~ N(θ, σ²), θ̂ = Ȳ = (1/n) Σ_{i=1}^n Y_i
57 Sampling Distribution of the Estimator
First moment:
E(θ̂) = E((1/n) Σ_{i=1}^n Y_i) = (1/n) Σ_{i=1}^n E(Y_i) = nθ/n = θ
This is an example of an unbiased estimator: Bias(θ̂) = E(θ̂) − θ = 0.
58 Variance of an Estimator
Definition (variance of an estimator): Var(θ̂) = E([θ̂ − E(θ̂)]²)
Remember: Var(cY) = c² Var(Y), and
Var(Σ_{i=1}^n Y_i) = Σ_{i=1}^n Var(Y_i)
only if the Y_i are independent with finite variance.
59 Example Estimator Variance
Var(θ̂) = Var((1/n) Σ_{i=1}^n Y_i) = (1/n²) Σ_{i=1}^n Var(Y_i) = nσ²/n² = σ²/n
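The σ²/n result is easy to see by simulation: draw many samples, compute the mean of each, and look at the spread of those means. A minimal sketch (θ = 5, σ = 2, n = 25 are made-up illustration values, so σ²/n = 0.16):

```python
import random

# Monte Carlo check that Var(Ȳ) ≈ σ²/n for the sample mean.
# θ = 5, σ = 2, n = 25 are made-up illustration values.
random.seed(1)
theta, sigma, n, reps = 5.0, 2.0, 25, 20000

means = []
for _ in range(reps):
    ys = [random.gauss(theta, sigma) for _ in range(n)]
    means.append(sum(ys) / n)

grand = sum(means) / reps
var_hat = sum((m - grand) ** 2 for m in means) / reps

print(round(var_hat, 3))   # close to σ²/n = 0.16
```

The empirical variance of the 20,000 sample means comes out near 4/25 = 0.16, as the derivation predicts.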
60 Bias-Variance Trade-off
The mean squared error of an estimator is
MSE(θ̂) = E([θ̂ − θ]²)
It can be re-expressed as
MSE(θ̂) = Var(θ̂) + [Bias(θ̂)]²
61 MSE = Var + Bias²: Proof
MSE(θ̂) = E((θ̂ − θ)²)
= E(([θ̂ − E(θ̂)] + [E(θ̂) − θ])²)
= E([θ̂ − E(θ̂)]²) + 2 E([E(θ̂) − θ][θ̂ − E(θ̂)]) + E([E(θ̂) − θ]²)
= Var(θ̂) + 2 [E(θ̂) − θ] E[θ̂ − E(θ̂)] + [Bias(θ̂)]²
= Var(θ̂) + 2 [E(θ̂) − θ] · 0 + [Bias(θ̂)]²
= Var(θ̂) + [Bias(θ̂)]²
(the cross term vanishes because E(θ̂) − θ is a constant and E[θ̂ − E(θ̂)] = 0)
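The decomposition can be verified numerically on a deliberately biased estimator. A minimal sketch using a shrunken sample mean θ̂ = 0.8·Ȳ (the shrinkage factor 0.8 and the settings θ = 5, σ = 2, n = 25 are all made-up illustration values):

```python
import random

# Numerically verify MSE = Var + Bias² for a deliberately biased
# estimator: the shrunken mean θ̂ = 0.8·Ȳ.
random.seed(2)
theta, sigma, n, reps = 5.0, 2.0, 25, 50000

ests = []
for _ in range(reps):
    ys = [random.gauss(theta, sigma) for _ in range(n)]
    ests.append(0.8 * sum(ys) / n)

mean_est = sum(ests) / reps
mse = sum((e - theta) ** 2 for e in ests) / reps
var = sum((e - mean_est) ** 2 for e in ests) / reps
bias = mean_est - theta

print(round(mse, 4), round(var + bias ** 2, 4))  # the two quantities agree
```

Here bias ≈ −1 and var ≈ 0.64·σ²/n ≈ 0.10, so MSE ≈ 1.10; the empirical MSE and Var + Bias² agree, as the proof guarantees.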
62 Trade-off
Think of variance as confidence and bias as correctness; the intuitions (largely) apply. Sometimes choosing a biased estimator can result in an overall lower MSE if it has much lower variance than the unbiased one.
63 s² Estimator for σ² for a Single Population
Sum of squares: Σ_{i=1}^n (Y_i − Ȳ)²
Sample variance estimator: s² = Σ_{i=1}^n (Y_i − Ȳ)² / (n − 1)
s² is an unbiased estimator of σ². The sum of squares has n − 1 degrees of freedom associated with it: one degree of freedom is lost by using Ȳ as an estimate of the unknown population mean µ.
64 Estimating the Error Term Variance σ² for the Regression Model
- The variance of each observation Y_i is σ² (the same as for the error term ɛ_i).
- Each Y_i comes from a different probability distribution, with a different mean that depends on the level X_i.
- The deviation of an observation Y_i must therefore be calculated around its own estimated mean.
65 s² Estimator for σ²
s² = MSE = SSE/(n − 2) = Σ (Y_i − Ŷ_i)² / (n − 2) = Σ e_i² / (n − 2)
MSE is an unbiased estimator of σ²: E(MSE) = σ². The sum of squares SSE has n − 2 degrees of freedom associated with it. Cochran's theorem (later in the course) tells us where degrees of freedom come from and how to calculate them.
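Putting the pieces together, a least squares fit followed by s² = SSE/(n − 2) estimates the error variance. A minimal sketch on simulated data (β_0 = 1, β_1 = 2, σ = 0.5, n = 500 are made-up illustration values, so σ² = 0.25):

```python
import random

# Fit the line by least squares, then estimate σ² with
# s² = Σ eᵢ² / (n − 2). True values (β0 = 1, β1 = 2, σ = 0.5) are made up.
random.seed(3)
n, sigma = 500, 0.5
X = [random.uniform(0, 5) for _ in range(n)]
Y = [1.0 + 2.0 * x + random.gauss(0, sigma) for x in X]

x_bar, y_bar = sum(X) / n, sum(Y) / n
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y)) / \
     sum((x - x_bar) ** 2 for x in X)
b0 = y_bar - b1 * x_bar

sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(X, Y))
s2 = sse / (n - 2)          # unbiased estimate of σ² = 0.25

print(round(s2, 2))         # close to 0.25
```

The estimate lands near the true σ² = 0.25; with n = 500 the difference between dividing by n − 2 and n is already tiny.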
66 Normal Error Regression Model
No matter how the error terms ɛ_i are distributed, the least squares method provides unbiased point estimators of β_0 and β_1 that also have minimum variance among all unbiased linear estimators. To set up interval estimates and make tests, however, we need to specify the distribution of the ɛ_i. We will assume that the ɛ_i are normally distributed.
67 Normal Error Regression Model
Y_i = β_0 + β_1 X_i + ɛ_i
- Y_i is the value of the response variable in the i-th trial
- β_0 and β_1 are parameters
- X_i is a known constant, the value of the predictor variable in the i-th trial
- ɛ_i ~ iid N(0, σ²) (note this is different: now we know the distribution)
- i = 1, ..., n
68 Notational Convention
When you see
ɛ_i ~ iid N(0, σ²),
it is read as: ɛ_i is independently and identically distributed according to a normal distribution with mean 0 and variance σ².
69 Maximum Likelihood Principle
The method of maximum likelihood chooses as estimates those values of the parameters that are most consistent with the sample data.
70 Likelihood Function
If Y_i ~ F(Θ), i = 1, ..., n, then the likelihood function is
L({Y_i}_{i=1}^n, Θ) = Π_{i=1}^n f(Y_i; Θ)
71 Maximum Likelihood Estimation
The likelihood function
L({Y_i}_{i=1}^n, Θ) = Π_{i=1}^n f(Y_i; Θ)
can be maximized w.r.t. the parameter(s) Θ; doing this, one arrives at estimators for the parameters. To do this, find solutions (analytically, or by following the gradient) to
dL({Y_i}_{i=1}^n, Θ)/dΘ = 0
72 Important Trick
(Almost) never maximize the likelihood function; maximize the log-likelihood function instead:
log(L({Y_i}_{i=1}^n, Θ)) = log(Π_{i=1}^n f(Y_i; Θ)) = Σ_{i=1}^n log(f(Y_i; Θ))
Usually the log of the likelihood is easier to work with mathematically.
73 ML Normal Regression
Likelihood function:
L({Y_i}_{i=1}^n; β_0, β_1, σ²) = Π_{i=1}^n (2πσ²)^(−1/2) exp(−(Y_i − β_0 − β_1 X_i)² / (2σ²))
= (2πσ²)^(−n/2) exp(−(1/(2σ²)) Σ_{i=1}^n (Y_i − β_0 − β_1 X_i)²)
We can maximize (how?) w.r.t. the parameters β_0, β_1, σ², and get...
74 Maximum Likelihood Estimator(s)
- β_0: b_0, the same as in the least squares case
- β_1: b_1, the same as in the least squares case
- σ²: σ̂² = Σ_i (Y_i − Ŷ_i)² / n
Note that the ML estimator σ̂² is biased, since s² is unbiased and s² = MSE = (n/(n − 2)) σ̂².
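The bias of σ̂² shows up clearly in small samples. A minimal simulation sketch comparing σ̂² = SSE/n with the unbiased s² = SSE/(n − 2) (σ² = 1, n = 10, and the true coefficients β_0 = 2, β_1 = 0.3 are made-up illustration values):

```python
import random

# Compare the biased ML estimator σ̂² = SSE/n with the unbiased
# s² = MSE = SSE/(n−2) over many simulated small samples.
random.seed(4)
n, reps = 10, 40000
X = [float(i) for i in range(n)]          # fixed design points
x_bar = sum(X) / n
sxx = sum((x - x_bar) ** 2 for x in X)

ml_sum = s2_sum = 0.0
for _ in range(reps):
    Y = [2.0 + 0.3 * x + random.gauss(0, 1.0) for x in X]
    y_bar = sum(Y) / n
    b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y)) / sxx
    b0 = y_bar - b1 * x_bar
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(X, Y))
    ml_sum += sse / n          # ML estimate, biased low
    s2_sum += sse / (n - 2)    # MSE, unbiased

print(round(ml_sum / reps, 2))   # ≈ (n−2)/n · σ² = 0.8
print(round(s2_sum / reps, 2))   # ≈ σ² = 1.0
```

Averaged over many samples, σ̂² systematically underestimates σ² by the factor (n − 2)/n, while s² centers on the true value.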
75 Comments
- Least squares minimizes the squared error between the prediction and the true output.
- The normal distribution is fully characterized by its first two central moments (mean and variance).
More informationAn Introduction to Bayesian Linear Regression
An Introduction to Bayesian Linear Regression APPM 5720: Bayesian Computation Fall 2018 A SIMPLE LINEAR MODEL Suppose that we observe explanatory variables x 1, x 2,..., x n and dependent variables y 1,
More informationECON 4160, Autumn term Lecture 1
ECON 4160, Autumn term 2017. Lecture 1 a) Maximum Likelihood based inference. b) The bivariate normal model Ragnar Nymoen University of Oslo 24 August 2017 1 / 54 Principles of inference I Ordinary least
More informationEstimating σ 2. We can do simple prediction of Y and estimation of the mean of Y at any value of X.
Estimating σ 2 We can do simple prediction of Y and estimation of the mean of Y at any value of X. To perform inferences about our regression line, we must estimate σ 2, the variance of the error term.
More information1 Least Squares Estimation - multiple regression.
Introduction to multiple regression. Fall 2010 1 Least Squares Estimation - multiple regression. Let y = {y 1,, y n } be a n 1 vector of dependent variable observations. Let β = {β 0, β 1 } be the 2 1
More informationEstimation theory. Parametric estimation. Properties of estimators. Minimum variance estimator. Cramer-Rao bound. Maximum likelihood estimators
Estimation theory Parametric estimation Properties of estimators Minimum variance estimator Cramer-Rao bound Maximum likelihood estimators Confidence intervals Bayesian estimation 1 Random Variables Let
More informationRegression and Covariance
Regression and Covariance James K. Peterson Department of Biological ciences and Department of Mathematical ciences Clemson University April 16, 2014 Outline A Review of Regression Regression and Covariance
More informationSGN Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection
SG 21006 Advanced Signal Processing: Lecture 8 Parameter estimation for AR and MA models. Model order selection Ioan Tabus Department of Signal Processing Tampere University of Technology Finland 1 / 28
More informationMultivariate Regression
Multivariate Regression The so-called supervised learning problem is the following: we want to approximate the random variable Y with an appropriate function of the random variables X 1,..., X p with the
More informationMaximum Likelihood Estimation
Maximum Likelihood Estimation Merlise Clyde STA721 Linear Models Duke University August 31, 2017 Outline Topics Likelihood Function Projections Maximum Likelihood Estimates Readings: Christensen Chapter
More informationThe Simple Linear Regression Model
The Simple Linear Regression Model Lesson 3 Ryan Safner 1 1 Department of Economics Hood College ECON 480 - Econometrics Fall 2017 Ryan Safner (Hood College) ECON 480 - Lesson 3 Fall 2017 1 / 77 Bivariate
More informationPART I. (a) Describe all the assumptions for a normal error regression model with one predictor variable,
Concordia University Department of Mathematics and Statistics Course Number Section Statistics 360/2 01 Examination Date Time Pages Final December 2002 3 hours 6 Instructors Course Examiner Marks Y.P.
More informationNeed for Several Predictor Variables
Multiple regression One of the most widely used tools in statistical analysis Matrix expressions for multiple regression are the same as for simple linear regression Need for Several Predictor Variables
More informationRegression Models - Introduction
Regression Models - Introduction In regression models there are two types of variables that are studied: A dependent variable, Y, also called response variable. It is modeled as random. An independent
More informationECE531 Lecture 10b: Maximum Likelihood Estimation
ECE531 Lecture 10b: Maximum Likelihood Estimation D. Richard Brown III Worcester Polytechnic Institute 05-Apr-2011 Worcester Polytechnic Institute D. Richard Brown III 05-Apr-2011 1 / 23 Introduction So
More informationHT Introduction. P(X i = x i ) = e λ λ x i
MODS STATISTICS Introduction. HT 2012 Simon Myers, Department of Statistics (and The Wellcome Trust Centre for Human Genetics) myers@stats.ox.ac.uk We will be concerned with the mathematical framework
More informationProbability and Statistics Notes
Probability and Statistics Notes Chapter Five Jesse Crawford Department of Mathematics Tarleton State University Spring 2011 (Tarleton State University) Chapter Five Notes Spring 2011 1 / 37 Outline 1
More informationProblem Set 2 Solution Sketches Time Series Analysis Spring 2010
Problem Set 2 Solution Sketches Time Series Analysis Spring 2010 Forecasting 1. Let X and Y be two random variables such that E(X 2 ) < and E(Y 2 )
More informationMultiple Linear Regression
Multiple Linear Regression Simple linear regression tries to fit a simple line between two variables Y and X. If X is linearly related to Y this explains some of the variability in Y. In most cases, there
More informationApplied Regression. Applied Regression. Chapter 2 Simple Linear Regression. Hongcheng Li. April, 6, 2013
Applied Regression Chapter 2 Simple Linear Regression Hongcheng Li April, 6, 2013 Outline 1 Introduction of simple linear regression 2 Scatter plot 3 Simple linear regression model 4 Test of Hypothesis
More information[y i α βx i ] 2 (2) Q = i=1
Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation
More informationDiagnostics and Remedial Measures
Diagnostics and Remedial Measures Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Diagnostics and Remedial Measures 1 / 72 Remedial Measures How do we know that the regression
More informationThe Simple Regression Model. Part II. The Simple Regression Model
Part II The Simple Regression Model As of Sep 22, 2015 Definition 1 The Simple Regression Model Definition Estimation of the model, OLS OLS Statistics Algebraic properties Goodness-of-Fit, the R-square
More informationBasic Probability Reference Sheet
February 27, 2001 Basic Probability Reference Sheet 17.846, 2001 This is intended to be used in addition to, not as a substitute for, a textbook. X is a random variable. This means that X is a variable
More informationSimple Regression Model Setup Estimation Inference Prediction. Model Diagnostic. Multiple Regression. Model Setup and Estimation.
Statistical Computation Math 475 Jimin Ding Department of Mathematics Washington University in St. Louis www.math.wustl.edu/ jmding/math475/index.html October 10, 2013 Ridge Part IV October 10, 2013 1
More information10. Linear Models and Maximum Likelihood Estimation
10. Linear Models and Maximum Likelihood Estimation ECE 830, Spring 2017 Rebecca Willett 1 / 34 Primary Goal General problem statement: We observe y i iid pθ, θ Θ and the goal is to determine the θ that
More informationLinear Models and Estimation by Least Squares
Linear Models and Estimation by Least Squares Jin-Lung Lin 1 Introduction Causal relation investigation lies in the heart of economics. Effect (Dependent variable) cause (Independent variable) Example:
More informationLectures on Simple Linear Regression Stat 431, Summer 2012
Lectures on Simple Linear Regression Stat 43, Summer 0 Hyunseung Kang July 6-8, 0 Last Updated: July 8, 0 :59PM Introduction Previously, we have been investigating various properties of the population
More informationLinear Algebra Review
Linear Algebra Review Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Linear Algebra Review 1 / 45 Definition of Matrix Rectangular array of elements arranged in rows and
More informationF & B Approaches to a simple model
A6523 Signal Modeling, Statistical Inference and Data Mining in Astrophysics Spring 215 http://www.astro.cornell.edu/~cordes/a6523 Lecture 11 Applications: Model comparison Challenges in large-scale surveys
More informationMultiple Regression. Dr. Frank Wood. Frank Wood, Linear Regression Models Lecture 12, Slide 1
Multiple Regression Dr. Frank Wood Frank Wood, fwood@stat.columbia.edu Linear Regression Models Lecture 12, Slide 1 Review: Matrix Regression Estimation We can solve this equation (if the inverse of X
More information2.1 Linear regression with matrices
21 Linear regression with matrices The values of the independent variables are united into the matrix X (design matrix), the values of the outcome and the coefficient are represented by the vectors Y and
More informationSTA 256: Statistics and Probability I
Al Nosedal. University of Toronto. Fall 2017 My momma always said: Life was like a box of chocolates. You never know what you re gonna get. Forrest Gump. There are situations where one might be interested
More information6.867 Machine Learning
6.867 Machine Learning Problem set 1 Solutions Thursday, September 19 What and how to turn in? Turn in short written answers to the questions explicitly stated, and when requested to explain or prove.
More information1 Motivation for Instrumental Variable (IV) Regression
ECON 370: IV & 2SLS 1 Instrumental Variables Estimation and Two Stage Least Squares Econometric Methods, ECON 370 Let s get back to the thiking in terms of cross sectional (or pooled cross sectional) data
More informationRegression, Ridge Regression, Lasso
Regression, Ridge Regression, Lasso Fabio G. Cozman - fgcozman@usp.br October 2, 2018 A general definition Regression studies the relationship between a response variable Y and covariates X 1,..., X n.
More informationPh.D. Preliminary Examination Statistics June 2, 2014
Ph.D. Preliminary Examination Statistics June, 04 NOTES:. The exam is worth 00 points.. Partial credit may be given for partial answers if possible.. There are 5 pages in this exam paper. I have neither
More informationParametric Techniques Lecture 3
Parametric Techniques Lecture 3 Jason Corso SUNY at Buffalo 22 January 2009 J. Corso (SUNY at Buffalo) Parametric Techniques Lecture 3 22 January 2009 1 / 39 Introduction In Lecture 2, we learned how to
More informationProblem Set 1. MAS 622J/1.126J: Pattern Recognition and Analysis. Due: 5:00 p.m. on September 20
Problem Set MAS 6J/.6J: Pattern Recognition and Analysis Due: 5:00 p.m. on September 0 [Note: All instructions to plot data or write a program should be carried out using Matlab. In order to maintain a
More informationMultiple regression. CM226: Machine Learning for Bioinformatics. Fall Sriram Sankararaman Acknowledgments: Fei Sha, Ameet Talwalkar
Multiple regression CM226: Machine Learning for Bioinformatics. Fall 2016 Sriram Sankararaman Acknowledgments: Fei Sha, Ameet Talwalkar Multiple regression 1 / 36 Previous two lectures Linear and logistic
More informationChapter 1 Linear Regression with One Predictor
STAT 525 FALL 2018 Chapter 1 Linear Regression with One Predictor Professor Min Zhang Goals of Regression Analysis Serve three purposes Describes an association between X and Y In some applications, the
More informationProblem 1 (20) Log-normal. f(x) Cauchy
ORF 245. Rigollet Date: 11/21/2008 Problem 1 (20) f(x) f(x) 0.0 0.1 0.2 0.3 0.4 0.0 0.2 0.4 0.6 0.8 4 2 0 2 4 Normal (with mean -1) 4 2 0 2 4 Negative-exponential x x f(x) f(x) 0.0 0.1 0.2 0.3 0.4 0.5
More informationApplied Econometrics - QEM Theme 1: Introduction to Econometrics Chapter 1 + Probability Primer + Appendix B in PoE
Applied Econometrics - QEM Theme 1: Introduction to Econometrics Chapter 1 + Probability Primer + Appendix B in PoE Warsaw School of Economics Outline 1. Introduction to econometrics 2. Denition of econometrics
More informationf X, Y (x, y)dx (x), where f(x,y) is the joint pdf of X and Y. (x) dx
INDEPENDENCE, COVARIANCE AND CORRELATION Independence: Intuitive idea of "Y is independent of X": The distribution of Y doesn't depend on the value of X. In terms of the conditional pdf's: "f(y x doesn't
More informationSTAT 4385 Topic 03: Simple Linear Regression
STAT 4385 Topic 03: Simple Linear Regression Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso xsu@utep.edu Spring, 2017 Outline The Set-Up Exploratory Data Analysis
More informationPerhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows.
Chapter 5 Two Random Variables In a practical engineering problem, there is almost always causal relationship between different events. Some relationships are determined by physical laws, e.g., voltage
More informationChapter 2. Some Basic Probability Concepts. 2.1 Experiments, Outcomes and Random Variables
Chapter 2 Some Basic Probability Concepts 2.1 Experiments, Outcomes and Random Variables A random variable is a variable whose value is unknown until it is observed. The value of a random variable results
More informationLecture 16 - Correlation and Regression
Lecture 16 - Correlation and Regression Statistics 102 Colin Rundel April 1, 2013 Modeling numerical variables Modeling numerical variables So far we have worked with single numerical and categorical variables,
More informationRegression Models - Introduction
Regression Models - Introduction In regression models, two types of variables that are studied: A dependent variable, Y, also called response variable. It is modeled as random. An independent variable,
More information9. Linear Regression and Correlation
9. Linear Regression and Correlation Data: y a quantitative response variable x a quantitative explanatory variable (Chap. 8: Recall that both variables were categorical) For example, y = annual income,
More informationUnbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.
Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it
More information1 Probability theory. 2 Random variables and probability theory.
Probability theory Here we summarize some of the probability theory we need. If this is totally unfamiliar to you, you should look at one of the sources given in the readings. In essence, for the major
More informationMaking sense of Econometrics: Basics
Making sense of Econometrics: Basics Lecture 2: Simple Regression Egypt Scholars Economic Society Happy Eid Eid present! enter classroom at http://b.socrative.com/login/student/ room name c28efb78 Outline
More informationSTAT 4385 Topic 01: Introduction & Review
STAT 4385 Topic 01: Introduction & Review Xiaogang Su, Ph.D. Department of Mathematical Science University of Texas at El Paso xsu@utep.edu Spring, 2016 Outline Welcome What is Regression Analysis? Basics
More informationSimple Linear Regression
Simple Linear Regression EdPsych 580 C.J. Anderson Fall 2005 Simple Linear Regression p. 1/80 Outline 1. What it is and why it s useful 2. How 3. Statistical Inference 4. Examining assumptions (diagnostics)
More informationECO220Y Simple Regression: Testing the Slope
ECO220Y Simple Regression: Testing the Slope Readings: Chapter 18 (Sections 18.3-18.5) Winter 2012 Lecture 19 (Winter 2012) Simple Regression Lecture 19 1 / 32 Simple Regression Model y i = β 0 + β 1 x
More informationEcon 2120: Section 2
Econ 2120: Section 2 Part I - Linear Predictor Loose Ends Ashesh Rambachan Fall 2018 Outline Big Picture Matrix Version of the Linear Predictor and Least Squares Fit Linear Predictor Least Squares Omitted
More information1. Regressions and Regression Models. 2. Model Example. EEP/IAS Introductory Applied Econometrics Fall Erin Kelley Section Handout 1
1. Regressions and Regression Models Simply put, economists use regression models to study the relationship between two variables. If Y and X are two variables, representing some population, we are interested
More informationSTAT 512 sp 2018 Summary Sheet
STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}
More informationFor more information about how to cite these materials visit
Author(s): Kerby Shedden, Ph.D., 2010 License: Unless otherwise noted, this material is made available under the terms of the Creative Commons Attribution Share Alike 3.0 License: http://creativecommons.org/licenses/by-sa/3.0/
More informationFall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.
1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n
More informationPart IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015
Part IB Statistics Theorems with proof Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly)
More information