Regression: Lecture 2

Niels Richard Hansen

April 26, 2012

Contents

1 Linear regression and least squares estimation
1.1 Distributional results
2 Non-linear effects and basis expansions
2.1 Basis expansions and splines in R
3 Missing data
A Derivation of the least squares solution

1 Linear regression and least squares estimation

General setup

The observations $y_1, \ldots, y_n$ are the response or outcome of independent random variables $Y_1, \ldots, Y_n$, collected in the vectors
$$y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \qquad Y = \begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix}.$$
The distribution of $Y_i$ depends upon explanatory variables, regressors or covariates $x_i \in \mathbb{R}^m$ and an unknown parameter $\theta$. The $n \times p$ design matrix, with $p = m + 1$, is
$$X = \begin{pmatrix} 1 & x_1^T \\ \vdots & \vdots \\ 1 & x_n^T \end{pmatrix}.$$

Least squares regression

If $Y_i$ is real valued with mean $\mu_i(\theta) = \mu_i(x_i, \theta)$, the sum of squares
$$S = S(\theta) = \sum_i (y_i - \mu_i(x_i, \theta))^2 = \|y - \mu(\theta)\|^2 = (y - \mu(\theta))^T (y - \mu(\theta))$$
is a measure of fit as a function of $\theta$.

Minimization of $S(\theta)$ gives the least squares estimator of $\theta$. A generalization is the weighted least squares estimator, obtained by minimizing the weighted sum of squares
$$S(\theta) = (y - \mu(\theta))^T V^{-1} (y - \mu(\theta)),$$
where $V$ is a positive definite matrix. An $n \times n$ matrix $V$ is positive definite if $y^T V y > 0$ for all $y \in \mathbb{R}^n$ with $y \neq 0$. A special type of weight matrix is a diagonal matrix,
$$V^{-1} = \begin{pmatrix} w_1 & & 0 \\ & \ddots & \\ 0 & & w_n \end{pmatrix},$$
in which case
$$S(\theta) = \sum_i w_i (y_i - \mu_i(x_i, \theta))^2.$$

Linear models

If $\theta = \beta \in \mathbb{R}^p$ and
$$\mu_i(\beta) = \beta_0 + \sum_{j=1}^m x_{ij} \beta_j = (1, x_i^T)\beta,$$
the model is linear in the parameters. The weighted sum of squares can be expressed using the design matrix:
$$S(\beta) = (y - X\beta)^T V^{-1} (y - X\beta).$$
The function $S(\beta)$ is a quadratic function of $\beta$, and in fact a convex function.
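To make the weighted sum of squares with a diagonal weight matrix concrete, here is a small R sketch on simulated data; the variable names and the variance structure are invented for the example, and the weights are passed to lm, which then minimizes exactly the weighted criterion above.

```r
## Weighted least squares sketch on hypothetical simulated data with
## heteroscedastic errors, error variance proportional to 1/w_i.
set.seed(1)
n <- 100
x <- runif(n)
w <- runif(n, 0.5, 2)               # known weights
eps <- rnorm(n, sd = 1 / sqrt(w))   # Var(eps_i) = 1 / w_i
y <- 1 + 2 * x + eps

fit_ols <- lm(y ~ x)                # ordinary least squares, V = I
fit_wls <- lm(y ~ x, weights = w)   # minimizes sum_i w_i (y_i - mu_i(beta))^2

summary(fit_wls)$coefficients       # estimates and standard errors under weighting
```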

Linear modeling is very popular and useful, and it is possible to derive many theoretical results about the resulting estimator, the linear (weighted) least squares estimator. All these mathematical derivations rely on the assumption that the mean is exactly linear in $\beta$. In practice one should always regard the linear model as an approximation,
$$\mu_i(\beta) \approx \beta_0 + \sum_{j=1}^m x_{ij} \beta_j,$$
where the precision of the approximation is swept under the carpet. Linearity is, for differentiable functions, a good local approximation, and it may extend reasonably to the convex hull of the $x_i$'s. However, we should always make a serious attempt to check this by some sort of model control. Having done so, and having computed the estimate $\hat\beta$, interpolation, that is, approximation of the mean of $Y$ by $(1, x^T)\hat\beta$ for $x \in \mathbb{R}^m$ in the convex hull of the observed $x_i$'s, is usually OK, whereas extrapolation is always dangerous. Extrapolation is strongly model dependent, and we do not have data to justify that the model is adequate for extrapolation.

The least squares solution

The least squares solution(s) is characterized as the solution to the linear equation
$$X^T V^{-1} X \beta = X^T V^{-1} y.$$
If $X$ has full rank $p$ there is a unique solution; otherwise the solution is not unique. In practice, the matrix inverse of $X^T V^{-1} X$ is not needed to solve the equation. Implementations will typically solve the equation directly, perhaps using a QR decomposition of $X$ or a Cholesky decomposition of $X^T V^{-1} X$.

1.1 Distributional results

Distributional results

The errors are
$$\varepsilon_i = Y_i - (1, x_i^T)\beta.$$

Assumption 1: $\varepsilon_1, \ldots, \varepsilon_n$ are (conditionally on $x_1, \ldots, x_n$) uncorrelated with mean value 0 and the same variance $\sigma^2$.

Assumption 2: $\varepsilon_1, \ldots, \varepsilon_n$ are (conditionally on $x_1, \ldots, x_n$) independent and identically distributed $N(0, \sigma^2)$.

Under Assumption 1 the least squares estimator is unbiased,
$$E(\hat\beta) = \beta, \qquad \mathrm{var}(\hat\beta) = \sigma^2 (X^T X)^{-1},$$
and
$$\hat\sigma^2 = \frac{1}{n - p} \sum_{i=1}^n \bigl(y_i - (1, x_i^T)\hat\beta\bigr)^2 = \frac{1}{n - p} S(\hat\beta)$$
is an unbiased estimator of the variance $\sigma^2$.

Proof: Under Assumption 1, $E(Y) = X\beta$ and $\mathrm{var}(Y) = \sigma^2 I_n$, hence
$$E(\hat\beta) = (X^T X)^{-1} X^T X \beta = \beta,$$
$$\mathrm{var}(\hat\beta) = (X^T X)^{-1} X^T \sigma^2 I_n X (X^T X)^{-1} = \sigma^2 (X^T X)^{-1}.$$

Distributional results

Under Assumption 2, where the errors are assumed independent and normally distributed, we can strengthen the conclusions and have
$$\hat\beta \sim N\bigl(\beta, \sigma^2 (X^T X)^{-1}\bigr), \qquad \frac{(n - p)\hat\sigma^2}{\sigma^2} \sim \chi^2_{n-p}.$$
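These formulas are easy to check numerically. The following sketch, on simulated data with invented variable names, solves the normal equations directly and reproduces the coefficient estimates and standard errors that lm computes internally via a QR decomposition.

```r
## Normal equations by hand versus lm, on hypothetical simulated data.
set.seed(2)
n <- 50
x1 <- rnorm(n); x2 <- rnorm(n)
y <- 0.5 + 1.5 * x1 - 2 * x2 + rnorm(n, sd = 0.7)

X <- cbind(1, x1, x2)                                # n x p design matrix, p = 3
p <- ncol(X)

beta_hat   <- solve(crossprod(X), crossprod(X, y))   # solves X^T X beta = X^T y
sigma2_hat <- sum((y - X %*% beta_hat)^2) / (n - p)  # unbiased variance estimate
se <- sqrt(sigma2_hat * diag(solve(crossprod(X))))   # from sigma^2 (X^T X)^{-1}

cbind(beta_hat, se)
summary(lm(y ~ x1 + x2))$coefficients[, 1:2]         # should agree with the above
```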

The standardized Z-score

$$Z_j = \frac{\hat\beta_j - \beta_j}{\hat\sigma \sqrt{(X^T X)^{-1}_{jj}}} \sim t_{n-p},$$
or, more generally, for any $a \in \mathbb{R}^p$,
$$\frac{a^T \hat\beta - a^T \beta}{\hat\sigma \sqrt{a^T (X^T X)^{-1} a}} \sim t_{n-p}.$$

More distributional results

A 95% confidence interval for $a^T \beta$ is thus obtained as
$$a^T \hat\beta \pm z_{n-p}\, \hat\sigma \sqrt{a^T (X^T X)^{-1} a} \qquad (1)$$
where $\hat\sigma \sqrt{a^T (X^T X)^{-1} a}$ is the estimated standard error of $a^T \hat\beta$ and $z_{n-p}$ is the 97.5% quantile in the $t_{n-p}$-distribution.

The test of a general $r$-dimensional linear restriction on $\beta$ (the hypothesis that $\beta$ lies in a $(p - r)$-dimensional linear subspace) can be done using the $F$-test with $(r, n - p)$ degrees of freedom. See EH, Chapter 10, for the details.

Using lm in R

Linear models are fitted in R using the lm function. The model is specified by a formula of the form

a ~ b + c + d

describing the response a and the explanatory variables b, c and d, and how they enter the model. The formula combined with the data results in a design matrix, a model matrix in R terminology.
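A small sketch of the lm interface on simulated data (all variable names are invented for the example): confint gives t-based intervals of the form (1) for the individual coordinates of $\beta$, and predict with interval = "confidence" gives the interval for $a^T\beta$ with $a = (1, x^T)$ at a new point $x$.

```r
## Fitting a linear model with lm on hypothetical simulated data.
set.seed(3)
dat <- data.frame(b = rnorm(100), c = runif(100), d = rnorm(100))
dat$a <- 1 + 2 * dat$b - dat$c + 0.5 * dat$d + rnorm(100)

fit <- lm(a ~ b + c + d, data = dat)
summary(fit)                 # coefficient estimates, standard errors, t-tests
confint(fit)                 # 95% t-based confidence intervals, cf. (1)
model.matrix(fit)[1:3, ]     # first rows of the design (model) matrix

## Confidence interval for a^T beta with a = (1, x^T) at a new point x:
newx <- data.frame(b = 0.5, c = 0.2, d = -1)
predict(fit, newdata = newx, interval = "confidence")
```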

2 Non-linear effects and basis expansions

Transformations and non-linear effects

Observed or measured explanatory variables need not enter directly in the linear regression model. Any prespecified transformation is possible. The R language allows for model formulas of the form

a ~ b + I(b^2) + log(c) + exp(d)

The expansion of the b term as a linear combination of b and b^2 effectively means that we model a quadratic relation between the response variable a and the explanatory variable b. Note the need for the special "as is" operator I in the R model formula above. It protects the squaring from being interpreted in a model formula context (where it relates to interactions), so that it takes its usual arithmetic meaning.

It is common not to know a good transformation upfront. Instead, we can attempt a basis expansion of a variable, using several terms in the model formula for a given variable, just as in the quadratic polynomial expansion of b above. Books can be written on the details of the various sets of basis functions that can be used to expand a non-linear effect. We will take a very direct approach here and focus on spline basis expansions.

Splines

Define $h_1(x) = 1$, $h_2(x) = x$ and, for knots $\xi_1, \ldots, \xi_K$,
$$h_{m+2}(x) = (x - \xi_m)_+, \qquad t_+ = \max\{0, t\}.$$
Then
$$f(x) = \sum_{m=1}^{K+2} \beta_m h_m(x)$$
is a piecewise linear, continuous function. One order-$M$ spline basis with knots $\xi_1, \ldots, \xi_K$ is
$$h_1(x) = 1, \ \ldots, \ h_M(x) = x^{M-1}, \qquad h_{M+l}(x) = (x - \xi_l)_+^{M-1}, \quad l = 1, \ldots, K.$$
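As a sketch of the truncated power basis above, in the simplest order-2 (piecewise linear) case, the following builds the basis functions by hand for a few hypothetical knots and fits the coefficients with lm; the data are simulated purely for illustration.

```r
## Piecewise linear spline fit via the truncated power basis, by hand.
set.seed(4)
x <- sort(runif(200, 0, 10))
y <- sin(x) + rnorm(200, sd = 0.3)           # hypothetical non-linear signal

knots <- c(2.5, 5, 7.5)                      # xi_1, ..., xi_K
trunc_basis <- sapply(knots, function(k) pmax(x - k, 0))   # (x - xi_m)_+
colnames(trunc_basis) <- paste0("h", seq_along(knots) + 2)

fit <- lm(y ~ x + trunc_basis)               # h_1 = 1 and h_2 = x come from the formula
f_hat <- fitted(fit)                         # piecewise linear, continuous fit

plot(x, y, col = "grey")
lines(x, f_hat, lwd = 2)
```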

Natural Cubic Splines

Splines of order $M$ are polynomials of degree $M - 1$ beyond the boundary knots $\xi_1$ and $\xi_K$. The natural cubic splines are the splines of order 4 that are linear beyond the two boundary knots. With
$$f(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3 + \sum_{k=1}^K \theta_k (x - \xi_k)^3_+,$$
the restriction is that
$$\beta_2 = \beta_3 = 0, \qquad \sum_{k=1}^K \theta_k = 0, \qquad \sum_{k=1}^K \theta_k \xi_k = 0,$$
and the functions
$$N_1(x) = 1, \qquad N_2(x) = x,$$
$$N_{2+l}(x) = \frac{(x - \xi_l)^3_+ - (x - \xi_K)^3_+}{\xi_K - \xi_l} - \frac{(x - \xi_{K-1})^3_+ - (x - \xi_K)^3_+}{\xi_K - \xi_{K-1}}, \qquad l = 1, \ldots, K - 2,$$
form a basis. Obviously $\beta_2 = \beta_3 = 0$ for these functions, and beyond the last knot the second derivative of $f$ is
$$f''(x) = \sum_{k=1}^K 6\theta_k (x - \xi_k) = 6x \sum_{k=1}^K \theta_k - 6 \sum_{k=1}^K \theta_k \xi_k,$$
which is zero for all $x$ if and only if the conditions above are fulfilled. For $N_{2+l}$ we see that
$$\theta_l = \frac{1}{\xi_K - \xi_l}, \qquad \theta_{K-1} = -\frac{1}{\xi_K - \xi_{K-1}}, \qquad \theta_K = \frac{1}{\xi_K - \xi_{K-1}} - \frac{1}{\xi_K - \xi_l},$$
and the conditions are easily verified. By evaluating the functions in the knots, say, it is on the other hand easy to see that the $K$ different functions are linearly independent. Therefore they must span the space of natural cubic splines, which has co-dimension 4 in the set of cubic splines.

Interactions and tensor products

Interactions between continuous variables are difficult, the simplest model being

a ~ b + c + b:c

which, for a given value of c, is linear in b and vice versa, but jointly is a quadratic function. If the effects of both variables are expanded in a basis $h_1, \ldots, h_K$, the bivariate functions
$$(h_i \otimes h_j)(x, y) = h_i(x) h_j(y)$$
form the tensor product basis for expanding interaction effects.

Statistics with basis expansions

Expanding the effect of a variable using $K$ basis functions $h_1, \ldots, h_K$ should generally be understood and analyzed as follows:

The estimated function $\hat h = \sum_k \hat\beta_k h_k$ is interpretable and informative, whereas the individual parameters $\hat\beta_k$ typically are not.

One should consider combined $F$-tests of the hypothesis that $h$ is 0, or that $h$ is linear, against the model with a fully expanded $h$, and not parallel or successive tests of individual parameters.

Pointwise confidence intervals for $\hat h(x)$ are easily computed by observing that $\hat h(x) = a^T \hat\beta$ with $a_k = h_k(x)$, cf. (1).
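The following sketch illustrates these two points on simulated data, reusing the hand-built piecewise linear basis from above: a combined F-test of a linear effect against the fully expanded effect via anova, and a pointwise confidence interval for $\hat h(x_0)$ computed directly as $a^T\hat\beta$ with the variance $a^T \widehat{\mathrm{var}}(\hat\beta) a$, cf. (1). Data, knots and names are invented for the example.

```r
## Combined F-test and pointwise interval for a basis-expanded effect.
set.seed(5)
x <- sort(runif(200, 0, 10))
y <- sin(x) + rnorm(200, sd = 0.3)
knots <- c(2.5, 5, 7.5)
H <- cbind(1, x, sapply(knots, function(k) pmax(x - k, 0)))  # h_1(x), ..., h_K(x)

fit_lin <- lm(y ~ x)
fit_exp <- lm(y ~ H - 1)          # -1: the intercept is already the first basis function
anova(fit_lin, fit_exp)           # combined F-test: h linear against fully expanded h

## Pointwise 95% confidence interval for h-hat(x0) = a^T beta-hat, a_k = h_k(x0):
x0 <- 6
a  <- c(1, x0, pmax(x0 - knots, 0))
est <- sum(a * coef(fit_exp))
se  <- sqrt(drop(t(a) %*% vcov(fit_exp) %*% a))
est + c(-1, 1) * qt(0.975, df = fit_exp$df.residual) * se
```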

2.1 Basis expansions and splines in R

It is possible to expand the effect explicitly in the model formula, but it is typically not desirable. We would rather have a single term in the model formula that describes our intention to expand a given variable using a function basis. This is possible by using a function in the formula specification that, given a vector of x-values, returns a matrix of basis function evaluations with one column per basis function. The poly function is one such example, which can be used to expand the effect in orthogonal polynomials. Another is ns, which produces a B-spline basis for the natural cubic splines. In either case the number of basis functions that one wants to use has to be specified: for polynomials in terms of the total degree of the fitted polynomial, and for the splines either in terms of the number of basis functions (the degrees of freedom, and implicitly the number of knots) or by the specific positioning of the knots.

The rcs function from Harrell's rms package gives a different set of basis functions for the natural cubic splines. Expansions of interaction effects using tensor product bases can be done by including a term in the formula like rcs(a):rcs(b).

The very convenient consequence of using functions like poly, ns and rcs is that we can then use anova to test the combined effect of the expanded term, and we can easily obtain term estimates and confidence bands using the predict function.
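A minimal sketch of this workflow with ns from the splines package, again on simulated data with invented names: the df argument fixes the number of basis functions, anova tests the combined effect of the expanded term, and predict gives fitted values with pointwise confidence bands.

```r
## Natural cubic spline expansion with ns(), combined tests and bands.
library(splines)
set.seed(6)
dat <- data.frame(x = runif(300, 0, 10))
dat$y <- sin(dat$x) + rnorm(300, sd = 0.4)

fit0 <- lm(y ~ 1, data = dat)             # no effect of x
fit1 <- lm(y ~ x, data = dat)             # linear effect
fit2 <- lm(y ~ ns(x, df = 5), data = dat) # natural cubic spline with 5 df

anova(fit0, fit1, fit2)                   # combined F-tests of the expanded term

grid <- data.frame(x = seq(0, 10, length.out = 200))
band <- predict(fit2, newdata = grid, interval = "confidence")
plot(y ~ x, data = dat, col = "grey")
matlines(grid$x, band, lty = c(1, 2, 2), col = "black")
```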

3 Missing data

In this discussion of missing data we formulate the concepts only in terms of the explanatory variables $x_i$. There is no real loss of generality, as we can simply regard the distinguished response in these circumstances as just another explanatory variable. Moreover, we consider each observation individually and will avoid the subscript index.

Missing completely at random (MCAR)

In words: The reason that $x_j$ is missing is unrelated to the actual value of the entire vector $x$.

Mechanism: For the $j$th variable a (loaded) coin flip with probability $p_j$ of missingness determines if the variable is missing.

Mathematical: If $Z_j$ is an indicator variable of whether $x_j$ is missing, then $Z_j \perp X_1, \ldots, X_m$.

Deleting observations with missing variables under MCAR will not bias the result, but it can lead to inefficient usage of the available data.

Missing at random (MAR)

In words: The reason that $x_j$ is missing is unrelated to the actual values of the missing variables in $x$.

Mechanism: For every index set $z$ (a subset of $\{1, \ldots, m\}$), the probability that $z$ is the set of indices of the observed variables is $p(z \mid x_z)$. Here $x_z$ denotes the observed part of $x$ corresponding to $z$.

Mathematical: If $Z$ is the random index set of the observed variables in $x$, then
$$1(Z = z) \perp X \mid X_z$$
for all $z$.

MAR allows us to predict, or impute, the missing variables using the conditional distribution of $X_{z^c}$ given $X_z = x_z$, e.g. continuous variables as
$$\hat X_{z^c} = E(X_{z^c} \mid X_z = x_z).$$
If the $X$ variables are all discrete, the conditional independence can be expressed in terms of point probabilities, and the joint distribution takes the form
$$p(z \mid x_z)\, q(x_{z^c} \mid x_z)\, r(x_z).$$
For the general case we can take such a factorization of the joint density w.r.t. a suitable product measure as the definition.

Imputation

Assuming MAR we build regression models from the available data and:

Impute a single best guess from the regression model.

Impute a randomly sampled value from the regression model, which reflects the variability in the remaining data set.

Do multiple imputations, each time using the resulting data set with imputed values for the desired regression analysis, and then average the parameter estimates.

General advice: It is better to invent data (impute) than to discard data.
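A small base-R sketch of single and simple multiple regression imputation under MAR, with one variable made artificially missing in simulated data; this only illustrates the idea, and in practice dedicated multiple imputation software is preferable.

```r
## Regression imputation sketch under MAR (simulated illustration only).
set.seed(7)
n <- 500
x1 <- rnorm(n)
x2 <- 0.8 * x1 + rnorm(n, sd = 0.6)
y  <- 1 + x1 + x2 + rnorm(n)

## Make x2 missing at random: missingness depends only on the observed x1.
miss <- rbinom(n, 1, plogis(-1 + 1.5 * x1)) == 1
x2_obs <- ifelse(miss, NA, x2)

## Single imputation: best guess from a regression of x2 on x1 (complete cases).
imp_fit <- lm(x2_obs ~ x1, subset = !miss)
x2_best <- x2_obs
x2_best[miss] <- predict(imp_fit, newdata = data.frame(x1 = x1[miss]))
coef(lm(y ~ x1 + x2_best))

## Simple multiple imputation: draw imputed values with noise, refit, average.
sigma_imp <- summary(imp_fit)$sigma
coefs <- replicate(20, {
  x2_draw <- x2_best
  x2_draw[miss] <- x2_best[miss] + rnorm(sum(miss), sd = sigma_imp)
  coef(lm(y ~ x1 + x2_draw))
})
rowMeans(coefs)   # averaged parameter estimates across imputations
```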

A Derivation of the least squares solution

The least squares solution: the geometric way

With $L = \{X\beta \mid \beta \in \mathbb{R}^p\}$ the column space of $X$, the quantity
$$S(\beta) = \|y - X\beta\|^2_V$$
is minimized whenever $X\beta$ is the orthogonal projection of $y$ onto $L$ in the inner product given by $V^{-1}$. Here $\|\cdot\|_V$ is used to denote the corresponding norm induced by this inner product. The column space projection equals
$$P = X(X^T V^{-1} X)^{-1} X^T V^{-1}$$
whenever $X$ has full rank $p$. In this case $X\beta = Py$ has the unique solution
$$\hat\beta = (X^T V^{-1} X)^{-1} X^T V^{-1} y.$$

One verifies that $P$ is in fact the orthogonal projection by verifying three characterizing properties:
$$PL = L,$$
$$P^2 = X(X^T V^{-1} X)^{-1} X^T V^{-1} X(X^T V^{-1} X)^{-1} X^T V^{-1} = X(X^T V^{-1} X)^{-1} X^T V^{-1} = P,$$
$$P^T V^{-1} = \bigl(X(X^T V^{-1} X)^{-1} X^T V^{-1}\bigr)^T V^{-1} = V^{-1} X(X^T V^{-1} X)^{-1} X^T V^{-1} = V^{-1} P.$$
The latter property is self-adjointness w.r.t. the inner product given by $V^{-1}$.

If $X$ does not have full rank $p$, the projection is still well defined and can be written as
$$P = X(X^T V^{-1} X)^- X^T V^{-1},$$
where $(X^T V^{-1} X)^-$ denotes a generalized inverse. A generalized inverse of a matrix $A$ is a matrix $A^-$ with the property that $A A^- A = A$, and using this property one easily verifies the same three conditions for the projection. In this case, however, there is not a unique solution to $X\beta = Py$.

The least squares solution: the calculus way

Since $S(\beta) = (y - X\beta)^T V^{-1} (y - X\beta)$,
$$D_\beta S(\beta) = -2(y - X\beta)^T V^{-1} X.$$
The derivative is a $1 \times p$ dimensional matrix, a row vector. The gradient is $\nabla_\beta S(\beta) = D_\beta S(\beta)^T$, and
$$D^2_\beta S(\beta) = 2 X^T V^{-1} X.$$
If $X$ has rank $p$, $D^2_\beta S(\beta)$ is (globally) positive definite, and there is a unique minimizer found by solving $D_\beta S(\beta) = 0$. The solution is
$$\hat\beta = (X^T V^{-1} X)^{-1} X^T V^{-1} y.$$

For the differentiation it may be useful to think of $S(\beta)$ as a composition of the function $a(\beta) = y - X\beta$ from $\mathbb{R}^p$ to $\mathbb{R}^n$, with derivative $D_\beta a(\beta) = -X$, and the function $b(z) = \|z\|^2_V = z^T V^{-1} z$ from $\mathbb{R}^n$ to $\mathbb{R}$, with derivative $D_z b(z) = 2 z^T V^{-1}$. By the chain rule,
$$D_\beta S(\beta) = D_z b(a(\beta)) D_\beta a(\beta) = -2(y - X\beta)^T V^{-1} X.$$
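The projection characterization is easy to check numerically. Below is a small sketch with random matrices (assumed to give a full rank design) that builds $P$ for a positive definite $V$ and verifies idempotency, self-adjointness w.r.t. the $V^{-1}$ inner product, and that $Py$ coincides with the weighted least squares fit $X\hat\beta$.

```r
## Numerical check of the weighted projection P = X (X^T V^{-1} X)^{-1} X^T V^{-1}.
set.seed(8)
n <- 20; p <- 4
X <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))   # assumed full rank p
A <- matrix(rnorm(n * n), n, n)
V <- crossprod(A) + diag(n)                           # a positive definite weight matrix
Vinv <- solve(V)

P <- X %*% solve(t(X) %*% Vinv %*% X) %*% t(X) %*% Vinv

max(abs(P %*% X - X))                  # P L = L: P fixes the column space of X
max(abs(P %*% P - P))                  # idempotent: P^2 = P
max(abs(t(P) %*% Vinv - Vinv %*% P))   # self-adjoint w.r.t. the V^{-1} inner product

## The projected response P y equals the weighted least squares fit:
y <- rnorm(n)
beta_hat <- solve(t(X) %*% Vinv %*% X, t(X) %*% Vinv %*% y)
max(abs(P %*% y - X %*% beta_hat))     # should be numerically zero
```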
