The General Linear Model. Monday, Lecture 2. Jeanette Mumford, University of Wisconsin - Madison

How we're approaching the GLM: regression for behavioral data, first without using matrices (to understand least squares), then using matrices (with more than 1 regressor, you need this).

What you'll get out of this: What is least squares? What is a residual? How do you multiply a matrix and a vector? What are degrees of freedom? How do you obtain the estimates for the GLM using matrix math, including the variance?

Do you remember the equation for a line?

Do you remember the equation for a line? y=b+mx

[Scatter plot: Reaction Time (s) vs. Age] Do you remember the equation for a line? $RT_i = \beta_0 + \beta_1 \mathrm{Age}_i$: the line gives the population mean, and since the fit isn't perfect we must also account for error, $RT_i = \beta_0 + \beta_1 \mathrm{Age}_i + \epsilon_i$.

The Model. For the $i$-th observational unit: $Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$. $Y_i$: the dependent (random) variable. $X_i$: independent variable (not random). $\beta_0, \beta_1$: model parameters. $\epsilon_i$: random error, how the observation deviates from the population mean.

Simple summary: $\mathrm{mean}(Y_i) = \beta_0 + \beta_1 X_i$ and $\mathrm{var}(Y_i) = \sigma^2$.

Fitting the Model. Q: Which line fits the data best? [Scatter plot: Reaction Time (s) vs. Age]

Fitting the Model. Minimize the distance between the data and the line (the error). Absolute distance? Squared distance? [Scatter plot: Reaction Time (s) vs. Age, with the error term marked]

Least Squares. Minimize the squared differences: minimize $\sum_{i=1}^{n} (Y_i - \beta_0 - \beta_1 X_i)^2$. Works out nicely distribution-wise, and it's an easy minimization problem.
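As a concrete illustration (not from the slides), here is a minimal Python sketch of least squares for the simple reaction-time regression, using the standard closed-form estimates; the rt and age values are made up for the example.

    import numpy as np

    # made-up example data (ages in years, reaction times in seconds)
    age = np.array([21.0, 25.0, 30.0, 41.0, 52.0, 60.0])
    rt = np.array([0.45, 0.47, 0.52, 0.60, 0.68, 0.75])

    # closed-form least squares estimates for RT_i = b0 + b1 * Age_i + e_i
    b1 = np.sum((age - age.mean()) * (rt - rt.mean())) / np.sum((age - age.mean()) ** 2)
    b0 = rt.mean() - b1 * age.mean()

    residuals = rt - (b0 + b1 * age)          # what the fitted line fails to explain
    print(b0, b1, np.sum(residuals ** 2))     # the sum of squared residuals is minimized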

Bias and Variance

Bias and Variance. [Figure: four panels illustrating high bias / low variance, low bias / high variance, high bias / high variance, and low bias / low variance]

Property of least squares. Gauss-Markov assumptions: the error has mean 0, the errors aren't correlated with each other, and the variance is the same for all observations. Under these assumptions the least squares estimates are unbiased and have the lowest variance among all unbiased (linear) estimators.

What about the variance? We also need an estimate for $\sigma^2$. Start with the sum of squared errors, then divide by the appropriate degrees of freedom: (# of independent pieces of information) - (# of parameters in the model).
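Continuing the made-up example above, a sketch of the variance estimate: divide the sum of squared residuals by N minus the number of estimated parameters (here 2, the intercept and slope).

    # degrees of freedom: N observations minus 2 estimated parameters (b0, b1)
    N = len(rt)
    sigma2_hat = np.sum(residuals ** 2) / (N - 2)
    print(sigma2_hat)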

Take away up to this point: we typically use least squares estimation to estimate the betas in regression, and by Gauss-Markov it has minimum variance among all unbiased (linear) estimators.

You don't need to do regression this way. Anybody ever hear of using absolute error instead of squared error? Do you know the context? Anybody ever hear of purposely biasing (!) an estimate in order to reduce variability? Do you know the context?

Multiple Linear Regression Add more parameters to the model Time for linear algebra!

Matrices. $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix}$ is a 2x3 matrix: the first subscript is the row index, the second is the column index.

Matrices. Square matrix: same # of rows and columns. Vector: a column (row) vector has 1 column (row).

Matrices. Transpose, written $A^T$ or $A'$: swap columns and rows. Addition and subtraction are element-wise.

Matrices. Multiplication is trickier: the number of columns of the first matrix must match the number of rows of the second matrix.

Matrices: multiplication, worked example. Each entry of the product is the dot product of a row of the first matrix with a column of the second; with first row (1, 2) and columns (4, 1) and (2, 4), the entries are 1x4 + 2x1 = 6 and 1x2 + 2x4 = 10.

You try it out: $(1\ 2\ 3\ 4)\begin{pmatrix}1\\1\\1\\1\end{pmatrix} = ??$ And in the other order, $\begin{pmatrix}1\\1\\1\\1\end{pmatrix}(1\ 2\ 3\ 4)$? Putting the column of ones and the column $(1, 2, 3, 4)^T$ side by side gives the 4x2 matrix $\begin{pmatrix}1 & 1\\ 1 & 2\\ 1 & 3\\ 1 & 4\end{pmatrix}$, which multiplies a 2x1 parameter vector $\begin{pmatrix}\beta_0\\ \beta_1\end{pmatrix}$.
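To check this kind of multiplication in code, here is a minimal numpy sketch; the specific matrices follow the reconstruction above and are illustrative, not necessarily the exact slide values.

    import numpy as np

    row = np.array([[1, 2, 3, 4]])   # 1x4 row vector
    ones = np.ones((4, 1))           # 4x1 column of ones

    print(row @ ones)                # 1x1 result: [[10.]]
    print(ones @ row)                # 4x4 outer product

    # design-matrix-style example: column of ones next to (1, 2, 3, 4)
    X = np.column_stack([np.ones(4), np.array([1, 2, 3, 4])])   # 4x2
    beta = np.array([0.3, 0.05])     # made-up (beta0, beta1)
    print(X @ beta)                  # 4 entries: beta0 + beta1 * x_i for each i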

Matrix Inverse. Denoted $A^{-1}$. Only for square matrices. Only exists if the matrix is full rank: all columns (rows) are linearly independent, but I'll spare the details.

Rank Deficient Matrices. Examples: 2*column1 = column3, or column1 + column2 = column3. SPM can handle rank deficiency, if the contrasts are specified properly.
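A small sketch of how one might check for rank deficiency numerically (an illustration, not how SPM does it); the example matrix is made up so that column1 + column2 = column3.

    import numpy as np

    # third column is the sum of the first two, so the matrix is rank deficient
    X = np.array([[1.0, 0.0, 1.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 1.0, 1.0]])

    print(np.linalg.matrix_rank(X))   # 2, less than the 3 columns
    # X.T @ X is therefore singular and has no ordinary inverse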

Can you find the rank deficiency? [Example design matrix from the slide]

Inverting a rectangular matrix $X$: if only the columns are linearly independent, then $X^T X$ is invertible. Pseudoinverse: $X^+ = (X^T X)^{-1} X^T$.
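A minimal sketch of the pseudoinverse for a tall matrix with linearly independent columns; numpy's pinv (which uses an SVD) should agree with the $(X^T X)^{-1} X^T$ formula in this case. The matrix here is made up for illustration.

    import numpy as np

    X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])  # 4x2, full column rank

    pinv_formula = np.linalg.inv(X.T @ X) @ X.T   # (X'X)^{-1} X'
    pinv_numpy = np.linalg.pinv(X)

    print(np.allclose(pinv_formula, pinv_numpy))  # True when X has full column rank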

Back to linear regression: $Y = X\beta + \epsilon$, where $Y$ is (nx1), $X$ is (nx4), $\beta$ is (4x1), and $\epsilon$ is (nx1). Each of the n observations becomes a row: $Y$ stacks the data, the design matrix $X$ has one column per regressor, $\beta$ holds the 4 parameters, and $\epsilon$ stacks the errors.
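Written out in stacked form (a generic version, since the slide's specific regressors are not named here):

    \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{pmatrix}
    =
    \begin{pmatrix}
      x_{11} & x_{12} & x_{13} & x_{14} \\
      x_{21} & x_{22} & x_{23} & x_{24} \\
      \vdots & \vdots & \vdots & \vdots \\
      x_{n1} & x_{n2} & x_{n3} & x_{n4}
    \end{pmatrix}
    \begin{pmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{pmatrix}
    +
    \begin{pmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{pmatrix}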

Viewing the Design Matrix: look at the actual numbers. [Table of the design matrix with columns M, F, age]

Viewing the Design Matrix: look at it as an image representation, where darker = smaller #. [Image of the design matrix with columns M, F, age]
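A sketch of how one might produce such an image view of a design matrix with matplotlib; the M/F/age values here are made up for illustration.

    import numpy as np
    import matplotlib.pyplot as plt

    # made-up design matrix: male indicator, female indicator, age
    age = np.array([21.0, 25.0, 30.0, 41.0, 52.0, 60.0])
    male = np.array([1, 0, 1, 0, 1, 0], dtype=float)
    X = np.column_stack([male, 1 - male, age])

    plt.imshow(X, cmap='gray', aspect='auto')   # darker = smaller value
    plt.xticks([0, 1, 2], ['M', 'F', 'age'])
    plt.xlabel('regressor')
    plt.ylabel('observation')
    plt.show()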

Multiple Linear Regression. The distribution of Y is a multivariate Normal, $Y \sim N(X\beta, \sigma^2 I)$: the covariance matrix has $\sigma^2$ on the diagonal and 0 everywhere else (independent, equal-variance errors).

Multiple Linear Regression. $\hat{\beta} = (X^T X)^{-1} X^T Y$ is really easy to derive. It's the same as least squares, but much easier to understand and write code for. Thanks, linear algebra!
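A minimal sketch of that estimate in code, with made-up data; for a real analysis one might prefer np.linalg.lstsq or np.linalg.pinv for numerical stability.

    import numpy as np

    # made-up data: Y and a design matrix X with an intercept and one regressor
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    Y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])
    X = np.column_stack([np.ones_like(x), x])

    beta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y   # (X'X)^{-1} X'Y
    print(beta_hat)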

Multiple Linear Regression. $\hat{\sigma}^2 = \dfrac{(Y - X\hat{\beta})^T (Y - X\hat{\beta})}{N - p}$, where N = length(Y) and p = length($\beta$), or Rank(X).

Statistical Properties. $E(\hat{\beta}) = \beta$, so the estimate is unbiased, and $\mathrm{Var}(\hat{\beta}) = \sigma^2 (X^T X)^{-1}$. But we don't know $\sigma^2$, so we plug in $\hat{\sigma}^2$.
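Continuing the sketch above (same made-up Y, X, and beta_hat), the variance estimate and the resulting covariance of $\hat{\beta}$:

    residuals = Y - X @ beta_hat
    N, p = X.shape
    sigma2_hat = residuals @ residuals / (N - p)

    cov_beta_hat = sigma2_hat * np.linalg.inv(X.T @ X)   # Var(beta_hat) with sigma^2 plugged in
    std_errors = np.sqrt(np.diag(cov_beta_hat))
    print(sigma2_hat, std_errors)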

Take away: matrix algebra makes GLM estimation waaay easier. Make sure you're comfortable multiplying a matrix and a vector. Handy to know how to estimate the parameters.

Ask me some questions

Do you know the answers? What is least squares? What is a residual? How do you multiply a matrix and a vector? What are degrees of freedom? How do you obtain the estimates for the GLM using matrix math, including the variance?

Questions??