Chapter 5 Matrix Approach to Simple Linear Regression

STAT 525, Spring 2018
Chapter 5: Matrix Approach to Simple Linear Regression
Professor Min Zhang

Matrix

- A matrix is a collection of elements arranged in rows and columns
- The elements may be numbers or symbols
- For example:

    A = \begin{bmatrix} 1 & 3 \\ 1 & 5 \\ 2 & 6 \end{bmatrix}

- Rows are indexed by the subscript i, columns by the subscript j
- The element in row 1, column 2 is 3
- The element in row 3, column 1 is 2

Elements are often expressed using symbols:

    A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1c} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2c} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{r1} & a_{r2} & a_{r3} & \cdots & a_{rc} \end{bmatrix}

- Matrix A has r rows and c columns, and is said to be of dimension r x c
- Element a_{ij} is in the i-th row and j-th column
- A matrix is square if r = c
- A matrix is called a column vector if c = 1 and a row vector if r = 1

Matrix Operations

Transpose
- Denoted A'
- Row 1 becomes column 1, ..., row r becomes column r; column 1 becomes row 1, ..., column c becomes row c
- If A = [a_{ij}], then A' = [a_{ji}]
- If A is r x c, then A' is c x r

Addition and Subtraction
- The matrices must have the same dimension
- Addition and subtraction are done element by element:

    A + B = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1c}+b_{1c} \\ \vdots & \vdots & & \vdots \\ a_{r1}+b_{r1} & a_{r2}+b_{r2} & \cdots & a_{rc}+b_{rc} \end{bmatrix}

Multiplication
- If multiplying by a scalar, \lambda A = [\lambda a_{ij}]
- If multiplying two matrices (C = AB), then c_{ij} = \sum_k a_{ik} b_{kj}
- The number of columns of A must equal the number of rows of B
- The resulting matrix has dimension rows(A) x columns(B)
- Elements are obtained by taking cross products of the rows of A with the columns of B:

    \begin{bmatrix} 1 & 2 \\ 4 & 1 \\ 3 & 3 \end{bmatrix} \begin{bmatrix} 4 & 2 & 1 \\ 1 & 2 & 1 \end{bmatrix} = \begin{bmatrix} 6 & 6 & 3 \\ 17 & 10 & 5 \\ 15 & 12 & 6 \end{bmatrix}
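As a quick numerical check of the transpose and multiplication rules, here is a minimal NumPy sketch using the matrices from the example above:

    import numpy as np

    A = np.array([[1, 2],
                  [4, 1],
                  [3, 3]])
    B = np.array([[4, 2, 1],
                  [1, 2, 1]])

    print(A.T)     # transpose of A: a 2 x 3 matrix
    print(A @ B)   # matrix product: 3 x 3
    # [[ 6  6  3]
    #  [17 10  5]
    #  [15 12  6]]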

Regression Matrices

Consider an example with n = 4 and express the observations

    Y_1 = \beta_0 + \beta_1 X_1 + \varepsilon_1
    Y_2 = \beta_0 + \beta_1 X_2 + \varepsilon_2
    Y_3 = \beta_0 + \beta_1 X_3 + \varepsilon_3
    Y_4 = \beta_0 + \beta_1 X_4 + \varepsilon_4

in matrix form:

    \begin{bmatrix} Y_1 \\ Y_2 \\ Y_3 \\ Y_4 \end{bmatrix}
    = \begin{bmatrix} \beta_0 + \beta_1 X_1 \\ \beta_0 + \beta_1 X_2 \\ \beta_0 + \beta_1 X_3 \\ \beta_0 + \beta_1 X_4 \end{bmatrix}
    + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \end{bmatrix}
    = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ 1 & X_3 \\ 1 & X_4 \end{bmatrix}
      \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix}
    + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \end{bmatrix}

That is, Y = X\beta + \varepsilon.

Special Regression Examples

Using multiplication and the transpose:

    Y'Y = \sum Y_i^2

    X'X = \begin{bmatrix} n & \sum X_i \\ \sum X_i & \sum X_i^2 \end{bmatrix}

    X'Y = \begin{bmatrix} \sum Y_i \\ \sum X_i Y_i \end{bmatrix}

We will use these to compute \hat{\beta}, etc.
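To make these quantities concrete, here is a small sketch with made-up data (the X and Y values are arbitrary, chosen only for illustration):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])         # hypothetical predictor values
    Y = np.array([2.1, 3.9, 6.2, 7.8])         # hypothetical responses
    X = np.column_stack([np.ones_like(x), x])  # design matrix: intercept column plus x

    print(Y @ Y)     # Y'Y: sum of squared responses
    print(X.T @ X)   # [[n, sum x], [sum x, sum x^2]]
    print(X.T @ Y)   # [sum Y, sum x*Y]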

Special Types of Matrices

Symmetric matrix
- A matrix is symmetric when A = A'
- This requires A to be square
- Example: X'X

Diagonal matrix
- A square matrix with all off-diagonal elements equal to zero
- Important example: the identity matrix I, which satisfies IA = AI = A

    I = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

Linear Dependence

Consider the matrix

    Q = \begin{bmatrix} 5 & 3 & 10 \\ 1 & 2 & 2 \\ 1 & 1 & 2 \end{bmatrix}

whose columns are the vectors C_1 = (5, 1, 1)', C_2 = (3, 2, 1)', and C_3 = (10, 2, 2)'.

- If there is a relationship among the columns of a matrix such that \lambda_1 C_1 + ... + \lambda_c C_c = 0 with not all \lambda_j equal to 0, then the set of column vectors is linearly dependent
- For the example above, 2C_1 + 0C_2 - C_3 = 0, so the columns of Q are linearly dependent
- If no such relationship exists, the set of columns is linearly independent
- The columns of an identity matrix are linearly independent
- The same definitions apply to rows

Rank of a Matrix

- The rank of a matrix is the maximum number of linearly independent columns (or rows)
- The rank of a matrix cannot exceed min(r, c)
- A matrix is of full rank when all of its columns are linearly independent
- Example: the rank of Q above is 2, since C_3 = 2C_1 while C_1 and C_2 are linearly independent
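NumPy can confirm both the rank and the dependence relation; a minimal check:

    import numpy as np

    Q = np.array([[5, 3, 10],
                  [1, 2,  2],
                  [1, 1,  2]])

    print(np.linalg.matrix_rank(Q))   # 2, so Q is not of full rank
    print(2 * Q[:, 0] - Q[:, 2])      # [0 0 0], i.e., 2*C1 - C3 = 0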

Inverse of a Matrix

- The inverse is analogous to the reciprocal of a scalar
- The inverse is defined only for a square matrix of full rank
- We want to find the inverse of S, i.e., the matrix S^{-1} such that S S^{-1} = I
- Easy example: a diagonal matrix. If

    S = \begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix}, then S^{-1} = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/4 \end{bmatrix}

  i.e., take the reciprocal of each element on the diagonal

General Procedure for a 2 x 2 Matrix

Consider

    A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}

1. Calculate the determinant D = ad - bc. If D = 0, the matrix has no inverse.
2. To form A^{-1}, switch a and d, make b and c negative, and multiply each element by 1/D:

    A^{-1} = \frac{1}{D} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} d/D & -b/D \\ -c/D & a/D \end{bmatrix}

These steps work only for a 2 x 2 matrix; an algorithm for the 3 x 3 case is given in the book.
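The two-step procedure translates directly into code; a minimal sketch (the function name inverse_2x2 is mine, introduced for illustration):

    import numpy as np

    def inverse_2x2(A):
        """Invert a 2 x 2 matrix via the determinant formula above."""
        a, b = A[0, 0], A[0, 1]
        c, d = A[1, 0], A[1, 1]
        D = a * d - b * c                       # step 1: the determinant
        if D == 0:
            raise ValueError("D = 0: the matrix has no inverse")
        return (1.0 / D) * np.array([[d, -b],   # step 2: swap a and d,
                                     [-c, a]])  # negate b and c, scale by 1/D

    S = np.array([[2.0, 0.0], [0.0, 4.0]])
    print(inverse_2x2(S))        # [[0.5, 0], [0, 0.25]], matching the diagonal example
    print(inverse_2x2(S) @ S)    # the identity matrix, up to rounding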

Use of the Inverse

- Consider the scalar equation 2x = 3, whose solution is x = (1/2) \cdot 3 = 3/2; the inverse of a matrix plays the same role as the reciprocal of a scalar
- It pertains to a set of equations

    A X = C, where A is r x r and X and C are r x 1

- Assuming A has an inverse:

    A^{-1} A X = A^{-1} C, so X = A^{-1} C
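A minimal sketch of solving such a system (the particular A and C are arbitrary, chosen for illustration):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    C = np.array([3.0, 5.0])

    X = np.linalg.inv(A) @ C        # X = A^{-1} C, as in the derivation above
    print(X)
    print(np.linalg.solve(A, C))    # same answer; solve() avoids forming the inverse

In numerical work, np.linalg.solve is preferred to explicitly inverting A, since it is faster and more stable, but the inverse form mirrors the algebra above.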

Random Vectors and Matrices

- Random vectors and matrices contain elements that are random variables
- We can compute their expectation and (co)variance
- In the regression setup Y = X\beta + \varepsilon, both \varepsilon and Y are random vectors
- Expectation vector: E(Y) = [E(Y_i)]
- Covariance matrix (symmetric):

    \sigma^2(Y) = \begin{bmatrix} \sigma^2(Y_1) & \sigma(Y_1, Y_2) & \cdots & \sigma(Y_1, Y_n) \\ \sigma(Y_2, Y_1) & \sigma^2(Y_2) & \cdots & \sigma(Y_2, Y_n) \\ \vdots & \vdots & & \vdots \\ \sigma(Y_n, Y_1) & \sigma(Y_n, Y_2) & \cdots & \sigma^2(Y_n) \end{bmatrix}

Basic Theorems

- Consider a random vector Y and a constant matrix A, and suppose W = AY
- W is also a random vector, with

    E(W) = A E(Y)

    \sigma^2(W) = A \sigma^2(Y) A'
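These identities are easy to check by simulation; a minimal sketch, with A, the mean vector, and the covariance matrix all chosen arbitrarily:

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[1.0, 2.0],
                  [0.0, 1.0]])
    mu = np.array([1.0, 3.0])        # E(Y)
    Sigma = np.array([[2.0, 0.5],
                      [0.5, 1.0]])   # sigma^2(Y)

    Y = rng.multivariate_normal(mu, Sigma, size=200_000)
    W = Y @ A.T                      # each row is one realization of W = AY

    print(W.mean(axis=0), A @ mu)    # E(W) vs. A E(Y): nearly equal
    print(np.cov(W.T))               # sigma^2(W), estimated from the draws
    print(A @ Sigma @ A.T)           # A sigma^2(Y) A': nearly equal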

Regression Matrices

- We can express the observations as Y = X\beta + \varepsilon
- Both Y and \varepsilon are random vectors, with

    E(Y) = X\beta + E(\varepsilon) = X\beta

    \sigma^2(Y) = 0 + \sigma^2(\varepsilon) = \sigma^2 I

Least Squares

Express the least squares criterion Q in matrix form, using (X\beta)' = \beta'X':

    Q = (Y - X\beta)'(Y - X\beta)
      = Y'Y - \beta'X'Y - Y'X\beta + \beta'X'X\beta
      = Y'Y - 2\beta'X'Y + \beta'X'X\beta

where the last step uses the fact that Y'X\beta is a scalar, so Y'X\beta = (Y'X\beta)' = \beta'X'Y. Taking the derivative and setting it to zero,

    -2X'Y + 2X'X\beta = 0

using \frac{\partial}{\partial \beta}(\beta'X'Y) = X'Y and \frac{\partial}{\partial \beta}(\beta'X'X\beta) = 2X'X\beta. This means

    b = (X'X)^{-1}X'Y
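With the same hypothetical data as earlier, b can be computed by solving the normal equations X'Xb = X'Y directly:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    Y = np.array([2.1, 3.9, 6.2, 7.8])
    X = np.column_stack([np.ones_like(x), x])

    b = np.linalg.solve(X.T @ X, X.T @ Y)   # solves X'X b = X'Y
    print(b)                                # [b0, b1], intercept and slope estimates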

Fitted Values

- The fitted values are \hat{Y} = Xb = X(X'X)^{-1}X'Y
- The matrix H = X(X'X)^{-1}X' is called the hat matrix
- H is symmetric, i.e., H' = H
- H is idempotent, i.e., HH = H
- Equivalently we can write \hat{Y} = HY
- The matrix H is used in diagnostics (Chapter 9)
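The symmetry and idempotency claims are easy to verify numerically; a sketch, continuing with the same hypothetical data:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    Y = np.array([2.1, 3.9, 6.2, 7.8])
    X = np.column_stack([np.ones_like(x), x])

    H = X @ np.linalg.inv(X.T @ X) @ X.T    # the hat matrix
    print(np.allclose(H, H.T))              # True: H is symmetric
    print(np.allclose(H @ H, H))            # True: H is idempotent
    print(H @ Y)                            # fitted values Y-hat = HY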

Residuals

- The residual vector is e = Y - \hat{Y} = Y - HY = (I - H)Y
- e is a random vector, with

    E(e) = (I - H) E(Y) = (I - H)X\beta = X\beta - X\beta = 0

    \sigma^2(e) = (I - H) \sigma^2(Y) (I - H)' = (I - H) \sigma^2 I (I - H)' = \sigma^2 (I - H)

- The first result uses HX = X; the second uses the symmetry and idempotency of I - H
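A sketch of the residual calculation, again with the same hypothetical data; the check that (I - H)X = 0 is the numerical counterpart of E(e) = 0:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    Y = np.array([2.1, 3.9, 6.2, 7.8])
    X = np.column_stack([np.ones_like(x), x])

    H = X @ np.linalg.inv(X.T @ X) @ X.T
    I = np.eye(len(Y))
    e = (I - H) @ Y                         # residuals e = (I - H)Y

    print(e)
    print(np.allclose((I - H) @ X, 0))      # True: (I - H)X = 0, so E(e) = 0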

ANOVA

- A quadratic form is defined as

    Y'AY = \sum_i \sum_j a_{ij} Y_i Y_j

  where A is a symmetric n x n matrix
- Sums of squares can be shown to be quadratic forms (page 207)
- Quadratic forms play a significant role in the theory of linear models when the errors are normally distributed
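A minimal sketch verifying that the matrix expression Y'AY equals the double sum, for an arbitrary symmetric A and random Y:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    Y = rng.normal(size=n)
    A = np.array([[2.0, 1.0, 0.0, 0.0],
                  [1.0, 3.0, 1.0, 0.0],
                  [0.0, 1.0, 2.0, 1.0],
                  [0.0, 0.0, 1.0, 1.0]])   # an arbitrary symmetric 4 x 4 matrix

    q_matrix = Y @ A @ Y                   # Y'AY
    q_sum = sum(A[i, j] * Y[i] * Y[j] for i in range(n) for j in range(n))
    print(np.isclose(q_matrix, q_sum))     # True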

Inference

- The vector b = (X'X)^{-1}X'Y = AY with A = (X'X)^{-1}X', so the basic theorems apply
- The mean and variance are

    E(b) = (X'X)^{-1}X' E(Y) = (X'X)^{-1}X'X\beta = \beta

    \sigma^2(b) = A \sigma^2(Y) A' = A \sigma^2 I A' = \sigma^2 AA' = \sigma^2 (X'X)^{-1}

  since AA' = (X'X)^{-1}X'X(X'X)^{-1} = (X'X)^{-1}
- Thus b is multivariate Normal(\beta, \sigma^2(X'X)^{-1})
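A Monte Carlo sketch of the sampling distribution of b; the true \beta and \sigma below are hypothetical values chosen for the simulation:

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.array([1.0, 2.0, 3.0, 4.0])
    X = np.column_stack([np.ones_like(x), x])
    beta = np.array([1.0, 2.0])             # hypothetical true coefficients
    sigma = 0.5                             # hypothetical error standard deviation

    bs = []
    for _ in range(50_000):
        Y = X @ beta + rng.normal(scale=sigma, size=len(x))
        bs.append(np.linalg.solve(X.T @ X, X.T @ Y))
    bs = np.array(bs)

    print(bs.mean(axis=0))                      # close to beta: E(b) = beta
    print(np.cov(bs.T))                         # close to sigma^2 (X'X)^{-1}
    print(sigma**2 * np.linalg.inv(X.T @ X))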

Consider X_h = (1, X_h)'.

Mean response:

    \hat{Y}_h = X_h' b
    E(\hat{Y}_h) = X_h' \beta
    Var(\hat{Y}_h) = X_h' \sigma^2(b) X_h = \sigma^2 X_h'(X'X)^{-1}X_h

Prediction of a new observation:

    \sigma^2\{pred\} = \sigma^2 (1 + X_h'(X'X)^{-1}X_h)
    s^2\{pred\} = MSE (1 + X_h'(X'X)^{-1}X_h)
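A sketch of these calculations for a hypothetical new point X_h = (1, 2.5)', with the same made-up data as before:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    Y = np.array([2.1, 3.9, 6.2, 7.8])
    X = np.column_stack([np.ones_like(x), x])

    b = np.linalg.solve(X.T @ X, X.T @ Y)
    e = Y - X @ b
    MSE = (e @ e) / (len(Y) - 2)           # n - 2 degrees of freedom

    Xh = np.array([1.0, 2.5])              # hypothetical new point
    quad = Xh @ np.linalg.inv(X.T @ X) @ Xh

    print(Xh @ b)                          # estimated mean response Y-hat_h
    print(MSE * quad)                      # estimated Var(Y-hat_h)
    print(MSE * (1 + quad))                # s^2{pred} for a new observation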

Chapter Review

- Review of Matrices
- Regression Model in Matrix Form
- Calculations Using Matrices