Topic 7 - Matrix Approach to Simple Linear Regression (Fall 2013)

Outline
- Review of matrices
- Regression model in matrix form
- Calculations using matrices

Matrix
A matrix is a collection of elements arranged in rows and columns. The elements can be numbers or symbols. For example:

$$A = \begin{bmatrix} 1 & 3 \\ 5 & 2 \\ 6 & 2 \end{bmatrix}$$

Rows are denoted with the $i$ subscript and columns with the $j$ subscript. Here the element in row 1, column 2 is 3, and the element in row 3, column 2 is 2.

Elements are often expressed using symbols:

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1c} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2c} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{r1} & a_{r2} & a_{r3} & \cdots & a_{rc} \end{bmatrix}$$

Matrix $A$ has $r$ rows and $c$ columns and is said to be of dimension $r \times c$. Element $a_{ij}$ is in the $i$th row and $j$th column. A matrix is square if $r = c$, and it is called a column vector if $c = 1$.
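Throughout this topic, short NumPy sketches can make the matrix operations concrete. The following minimal illustration (not part of the original slides; it assumes NumPy is available) shows dimension and element indexing using the example matrix above.

```python
import numpy as np

# The 3x2 example matrix from above: r = 3 rows, c = 2 columns
A = np.array([[1, 3],
              [5, 2],
              [6, 2]])

print(A.shape)   # (3, 2) -> the dimension r x c
print(A[0, 1])   # 3 -> the element a_12 (NumPy indexes rows and columns from 0)

# A column vector is a matrix with c = 1
y = np.array([[3], [6], [7]])
print(y.shape)   # (3, 1)
```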

Matrix Operations

Transpose
The transpose of $A$ is denoted $A'$. Row 1 becomes column 1, ..., row $r$ becomes column $r$; column 1 becomes row 1, ..., column $c$ becomes row $c$. If $A = [a_{ij}]$, then $A' = [a_{ji}]$. If $A$ is $r \times c$, then $A'$ is $c \times r$.

Addition and Subtraction
The matrices must have the same dimension, and addition/subtraction is done on an element-by-element basis:

$$A + B = \begin{bmatrix} a_{11} + b_{11} & \cdots & a_{1c} + b_{1c} \\ \vdots & & \vdots \\ a_{r1} + b_{r1} & \cdots & a_{rc} + b_{rc} \end{bmatrix}$$

Multiplication
For a scalar $\lambda$, $\lambda A = [\lambda a_{ij}]$. When multiplying two matrices ($AB$), the number of columns of $A$ must equal the number of rows of $B$, and the resulting matrix has dimension rows($A$) $\times$ columns($B$). Its elements are obtained by taking cross products of the rows of $A$ with the columns of $B$. For example:

$$\begin{bmatrix} 1 & 4 \\ 3 & 2 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 0 & 5 \end{bmatrix} = \begin{bmatrix} 2 & 21 \\ 6 & 13 \end{bmatrix}$$

Regression Matrices
Consider an example with $n = 4$ observations:
$$Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i, \qquad i = 1, 2, 3, 4$$
Stacking the four equations:
$$\begin{bmatrix} Y_1 \\ Y_2 \\ Y_3 \\ Y_4 \end{bmatrix} = \begin{bmatrix} \beta_0 + \beta_1 X_1 \\ \beta_0 + \beta_1 X_2 \\ \beta_0 + \beta_1 X_3 \\ \beta_0 + \beta_1 X_4 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \end{bmatrix} = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ 1 & X_3 \\ 1 & X_4 \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \end{bmatrix}$$
that is, $Y = X\beta + \varepsilon$.

Special Regression Examples
Using multiplication and the transpose:
$$Y'Y = \sum Y_i^2, \qquad X'X = \begin{bmatrix} n & \sum X_i \\ \sum X_i & \sum X_i^2 \end{bmatrix}, \qquad X'Y = \begin{bmatrix} \sum Y_i \\ \sum X_i Y_i \end{bmatrix}$$
We will use these to compute $\hat{\beta}$, etc. (a numerical sketch follows).
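As a quick check on these formulas, here is a hedged NumPy sketch that forms $X'X$, $X'Y$, and $Y'Y$ directly. The four $(X_i, Y_i)$ pairs are made-up illustrative values, not data from the slides.

```python
import numpy as np

# Made-up data for n = 4 observations: a column of ones plus the predictor X
X = np.array([[1.0, 2.0],
              [1.0, 4.0],
              [1.0, 5.0],
              [1.0, 7.0]])
Y = np.array([[3.0], [6.0], [7.0], [10.0]])

XtX = X.T @ X    # [[n, sum X_i], [sum X_i, sum X_i^2]] = [[4, 18], [18, 94]]
XtY = X.T @ Y    # [[sum Y_i], [sum X_i Y_i]] = [[26], [135]]
YtY = Y.T @ Y    # [[sum Y_i^2]] = [[194]]

print(XtX, XtY, YtY, sep="\n")
```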

Special Types of Matrices

Symmetric matrix
$A$ is symmetric when $A' = A$; this requires $A$ to be square. Example: $X'X$.

Diagonal matrix
A square matrix whose off-diagonal elements all equal zero. An important example is the identity matrix $I$, which satisfies $IA = AI = A$:
$$I = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Linear Dependence
If there is a relationship among the column (row) vectors of a matrix such that
$$\lambda_1 C_1 + \cdots + \lambda_c C_c = 0$$
and not all the $\lambda$'s are 0, then the set of column (row) vectors is linearly dependent. If no such relationship exists, the set of column (row) vectors is linearly independent. Consider a matrix $Q$ with column vectors $C_1$, $C_2$, $C_3$, for example
$$C_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad C_2 = \begin{bmatrix} 5 \\ 3 \end{bmatrix}, \quad C_3 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}, \qquad Q = \begin{bmatrix} 1 & 5 & 2 \\ 2 & 3 & 4 \end{bmatrix}$$

Rank of a Matrix
Using $\lambda_1 = 2$, $\lambda_2 = 0$, $\lambda_3 = -1$:
$$2 \begin{bmatrix} 1 \\ 2 \end{bmatrix} + 0 \begin{bmatrix} 5 \\ 3 \end{bmatrix} - \begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
so the columns of $Q$ are linearly dependent. The rank of a matrix is the number of linearly independent columns (or rows); the rank cannot exceed $\min(r, c)$. A matrix has full rank when all of its columns are linearly independent. In this example, the rank of $Q$ is 2.

Inverse of a Matrix
The inverse is similar to the reciprocal of a scalar. It is defined for a square $r \times r$ matrix of rank $r$ (full rank): we want to find the inverse of $S$, written $S^{-1}$, such that $S^{-1} S = I$. An easy example is a diagonal matrix. Let
$$S = \begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix}, \quad \text{then} \quad S^{-1} = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/4 \end{bmatrix}$$
i.e., take the inverse of each element on the diagonal.
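A short sketch can verify the dependence, rank, and diagonal-inverse examples numerically (NumPy assumed; the entries of $Q$ are the illustrative ones used above):

```python
import numpy as np

# Columns of Q from the example above: C3 = 2 * C1, so the columns are dependent
Q = np.array([[1.0, 5.0, 2.0],
              [2.0, 3.0, 4.0]])

lam = np.array([2.0, 0.0, -1.0])     # lambda_1, lambda_2, lambda_3
print(Q @ lam)                       # [0. 0.] -> a nontrivial combination gives 0
print(np.linalg.matrix_rank(Q))      # 2 -> rank cannot exceed min(r, c) = 2

# Inverse of a diagonal matrix: invert each diagonal element
S = np.diag([2.0, 4.0])
print(np.linalg.inv(S))              # [[0.5, 0.], [0., 0.25]]
```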

Inverse of a Matrix (continued)
General procedure for a $2 \times 2$ matrix. Consider
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$
Calculate the determinant $D = ad - bc$. If $D = 0$, the matrix has no inverse. Otherwise, in $A$, switch $a$ and $d$, make $b$ and $c$ negative, and multiply each element by $1/D$:
$$A^{-1} = \frac{1}{D} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
These steps work only for a $2 \times 2$ matrix; an algorithm for the $3 \times 3$ case is given in the book.

Use of the Inverse
Consider the equation $2x = 3$; multiplying by the reciprocal gives $x = \tfrac{1}{2} \cdot 3 = \tfrac{3}{2}$. The matrix inverse plays the same role as the reciprocal of a scalar, but for a set of equations. Assuming $A$ has an inverse,
$$\underset{(r \times r)}{A} \; \underset{(r \times 1)}{X} = \underset{(r \times 1)}{C} \;\Longrightarrow\; A^{-1} A X = A^{-1} C \;\Longrightarrow\; X = A^{-1} C$$
(A numerical sketch of the $2 \times 2$ formula and of solving $AX = C$ follows the next two slides.)

Random Vectors and Matrices
Random vectors and matrices contain elements that are random variables, so we can compute expectations and (co)variances. In the regression setup $Y = X\beta + \varepsilon$, both $\varepsilon$ and $Y$ are random vectors.

Expectation vector: $E(A) = [E(A_i)]$.

Covariance matrix (symmetric):
$$\sigma^2(A) = \begin{bmatrix} \sigma^2(A_1) & \sigma(A_1, A_2) & \cdots & \sigma(A_1, A_r) \\ \sigma(A_2, A_1) & \sigma^2(A_2) & \cdots & \sigma(A_2, A_r) \\ \vdots & \vdots & & \vdots \\ \sigma(A_r, A_1) & \sigma(A_r, A_2) & \cdots & \sigma^2(A_r) \end{bmatrix}$$

Basic Theorems
Consider a random vector $Y$ and a constant matrix $A$, and suppose $W = AY$. Then $W$ is also a random vector, and
$$E(W) = A\,E(Y), \qquad \sigma^2(W) = A\,\sigma^2(Y)\,A'$$
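Here is the promised sketch of the $2 \times 2$ inverse formula and of solving $AX = C$ (illustrative numbers, NumPy assumed):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# 2x2 inverse by hand: swap a and d, negate b and c, multiply by 1/D
D = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]    # determinant D = ad - bc = 10
A_inv = np.array([[ A[1, 1], -A[0, 1]],
                  [-A[1, 0],  A[0, 0]]]) / D
print(np.allclose(A_inv, np.linalg.inv(A)))  # True

# Solving A X = C as X = A^{-1} C; np.linalg.solve does this more stably
C = np.array([[3.0], [1.0]])
print(np.linalg.solve(A, C))                 # [[1.1], [-0.2]]
```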

Regression Matrices
We can express the observations as $Y = X\beta + \varepsilon$, where both $Y$ and $\varepsilon$ are random vectors. Then
$$E(Y) = X\beta + E(\varepsilon) = X\beta, \qquad \sigma^2(Y) = 0 + \sigma^2(\varepsilon) = \sigma^2 I$$

Least Squares
Express the quantity $Q$:
$$Q = (Y - X\beta)'(Y - X\beta) = Y'Y - \beta'X'Y - Y'X\beta + \beta'X'X\beta$$
Taking the derivative with respect to $\beta$ and setting it to zero:
$$-2X'Y + 2X'X\beta = 0$$
This means
$$b = (X'X)^{-1}X'Y$$

Fitted Values
The fitted values are
$$\hat{Y} = Xb = X(X'X)^{-1}X'Y$$
The matrix $H = X(X'X)^{-1}X'$ is called the hat matrix, so we can equivalently write $\hat{Y} = HY$. $H$ is symmetric and idempotent ($HH = H$), and it is used in diagnostics (Chapter 9).

Residuals
The residuals are
$$e = Y - \hat{Y} = Y - HY = (I - H)Y$$
$e$ is a random vector with
$$E(e) = (I - H)E(Y) = (I - H)X\beta = X\beta - X\beta = 0$$
(since $HX = X$), and
$$\sigma^2(e) = (I - H)\,\sigma^2(Y)\,(I - H)' = (I - H)\,\sigma^2 I\,(I - H)' = \sigma^2(I - H)$$
using the symmetry and idempotency of $I - H$.
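Continuing the made-up four-observation example from earlier, a hedged NumPy sketch of $b$, the hat matrix, and the residuals:

```python
import numpy as np

# Same made-up data as before
X = np.array([[1.0, 2.0], [1.0, 4.0], [1.0, 5.0], [1.0, 7.0]])
Y = np.array([[3.0], [6.0], [7.0], [10.0]])

# Least squares: b = (X'X)^{-1} X'Y (solve the normal equations directly)
b = np.linalg.solve(X.T @ X, X.T @ Y)
print(b.ravel())                     # [b0, b1]

# Hat matrix H = X (X'X)^{-1} X'
H = X @ np.linalg.inv(X.T @ X) @ X.T
print(np.allclose(H, H.T))           # True: H is symmetric
print(np.allclose(H @ H, H))         # True: H is idempotent

# Fitted values and residuals: Y_hat = HY, e = (I - H)Y
Y_hat = H @ Y
e = Y - Y_hat
print(e.sum())                       # ~0: residuals sum to zero with an intercept
```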

ANOVA
A quadratic form is defined as
$$Y'AY = \sum_i \sum_j a_{ij} Y_i Y_j$$
where $A$ is a symmetric $n \times n$ matrix. The sums of squares can be shown to be quadratic forms (see KNNL). Quadratic forms play a significant role in the theory of linear models when the errors are Normally distributed.

Inference
The vector $b = (X'X)^{-1}X'Y = AY$, with $A = (X'X)^{-1}X'$. Its mean and variance are
$$E(b) = (X'X)^{-1}X'E(Y) = (X'X)^{-1}X'X\beta = \beta$$
$$\sigma^2(b) = A\,\sigma^2(Y)\,A' = A\,\sigma^2 I\,A' = \sigma^2 AA' = \sigma^2 (X'X)^{-1}$$
Thus $b$ is multivariate Normal$\left(\beta,\ \sigma^2(X'X)^{-1}\right)$.

Inference Continued
Consider $X_h = \begin{bmatrix} 1 \\ X_h \end{bmatrix}$.

Mean response: $\hat{Y}_h = X_h' b$, with
$$E(\hat{Y}_h) = X_h'\beta, \qquad \mathrm{Var}(\hat{Y}_h) = X_h'\,\sigma^2(b)\,X_h = \sigma^2 X_h'(X'X)^{-1}X_h$$

Prediction of a new observation: $E(\hat{Y}_h) = X_h'\beta$, with
$$\mathrm{Var} = \sigma^2\left(1 + X_h'(X'X)^{-1}X_h\right)$$
(A numerical sketch follows the reading list.)

Background Reading
KNNL Chapter 5; KNNL Sections 6.1-6.5.
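To close, a hedged NumPy sketch using the same made-up data ($X_h = [1, 3]'$ is an arbitrary illustrative point) that estimates $\sigma^2$ by the MSE and computes the mean-response and prediction variances:

```python
import numpy as np

X = np.array([[1.0, 2.0], [1.0, 4.0], [1.0, 5.0], [1.0, 7.0]])
Y = np.array([[3.0], [6.0], [7.0], [10.0]])
n, p = X.shape                       # n = 4 observations, p = 2 parameters

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ Y
e = Y - X @ b
mse = (e.T @ e).item() / (n - p)     # MSE estimates sigma^2

s2_b = mse * XtX_inv                 # estimated covariance matrix of b

Xh = np.array([[1.0], [3.0]])        # X_h = [1, X_h]' with X_h = 3
quad = (Xh.T @ XtX_inv @ Xh).item()
var_mean = mse * quad                # estimated Var(Y_hat_h), mean response
var_pred = mse * (1.0 + quad)        # estimated variance for predicting a new Y
print(np.sqrt(var_mean), np.sqrt(var_pred))
```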