The General Linear Model. How we're approaching the GLM. What you'll get out of this. 8/11/16


The General Linear Model. Monday, Lecture 2. Jeanette Mumford, University of Wisconsin - Madison.

How we're approaching the GLM: regression for behavioral data, first without using matrices (to understand least squares), then using matrices (with more than one regressor, you need this).

What you'll get out of this: What is least squares? What is a residual? How do you multiply a matrix and a vector? What are degrees of freedom? How do you obtain the estimates for the GLM using matrix math, including the variance?

Do you remember the equation for a line? y = b + mx. For reaction time (s) plotted against age, the line is the population mean: RT_i = β0 + β1·Age_i. The fit isn't perfect, so we must account for error: RT_i = β0 + β1·Age_i + ε_i.

The Model. For the i-th observational unit, Y_i = β0 + β1·x_i + ε_i, where Y_i is the dependent (random) variable, x_i is the independent variable (not random), β0 and β1 are the model parameters, and ε_i is the random error: how the observation deviates from the population mean.

Fixed: the mean of Y_i, E(Y_i) = β0 + β1·x_i. Random: the variability of Y_i comes from ε_i. It follows that the variance of Y_i is Var(ε_i) = σ².

Simple summary: mean(Y_i) = β0 + β1·x_i and var(Y_i) = σ².

Fitting the Model. Q: Which line fits the data best? Minimize the distance between the data and the line (the error term). Absolute distance? Squared distance?

Least Squares. Minimize the squared differences: minimize Σ_i (Y_i - β0 - β1·x_i)². It works out nicely distribution-wise, and you can use calculus to get the estimates.

Bias and Variance.
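The calculus works out to closed-form estimates. A minimal sketch in NumPy, using made-up reaction-time data (the numbers and variable names are illustrative, not from the lecture):

```python
import numpy as np

# Hypothetical data: reaction time (s) vs. age
age = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
rt = np.array([0.50, 0.58, 0.64, 0.73, 0.79])

# Closed-form least-squares estimates from calculus:
# beta1 = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2), beta0 = ybar - beta1*xbar
xbar, ybar = age.mean(), rt.mean()
beta1 = np.sum((age - xbar) * (rt - ybar)) / np.sum((age - xbar) ** 2)
beta0 = ybar - beta1 * xbar

# Sanity check against NumPy's least-squares solver
X = np.column_stack([np.ones_like(age), age])
beta_np = np.linalg.lstsq(X, rt, rcond=None)[0]
print(beta0, beta1)   # intercept and slope
print(beta_np)        # same values from lstsq
```

Either route minimizes the same sum of squared differences; the closed form just makes the calculus explicit.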

Bias and Variance. An estimator can be high bias / low variance, low bias / high variance, high bias / high variance, or low bias / low variance.

Property of least squares: Gauss-Markov. Assumptions: the error has mean 0, the errors aren't correlated, and the variance is the same for all observations. Then the least squares estimates are unbiased and have the lowest variance among all unbiased estimators.

What about the variance? We also need an estimate for σ². Start with the sum of squared errors, then divide by the appropriate degrees of freedom: (# of independent pieces of information) - (# of parameters in the model).

Take away up to this point: we typically use least squares estimation to estimate the betas in regression; by Gauss-Markov, the estimates have minimum variance among all unbiased estimators.
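The variance estimate above can be sketched on the same hypothetical age/RT data (assumed numbers, for illustration only):

```python
import numpy as np

# Estimate sigma^2 as SSE / (N - p), where p = # parameters (here 2)
age = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
rt = np.array([0.50, 0.58, 0.64, 0.73, 0.79])

X = np.column_stack([np.ones_like(age), age])
beta = np.linalg.lstsq(X, rt, rcond=None)[0]

resid = rt - X @ beta       # residuals: data minus fitted values
sse = np.sum(resid ** 2)    # sum of squared errors
df = len(rt) - X.shape[1]   # degrees of freedom: N - p
sigma2_hat = sse / df
print(sigma2_hat)
```

Dividing by N - p rather than N is what makes the estimate unbiased: two degrees of freedom were spent estimating the intercept and slope.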

You don't need to do regression this way. Anybody ever hear of using absolute error instead of squared error? Do you know the context? Anybody ever hear of purposely biasing (!) an estimate in order to reduce variability? Do you know the context?

Multiple Linear Regression: add more parameters to the model. Time for linear algebra! The design X is an n × p matrix.

Matrices. An element X_ij has a row index i and a column index j.

Matrices. Square matrix: same # of rows and columns. Vector: a column (row) vector has 1 column (row). Transpose, X' or Xᵀ: swap columns and rows. Addition and subtraction are element-wise. Multiplication is trickier: the number of columns of the first matrix must match the number of rows of the second matrix.

Matrix multiplication works row by column: entry (i, j) of the product is the sum of products of row i of the first matrix with column j of the second. You try it out: multiply a small matrix by a vector this way.
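The worked numbers from the slides didn't survive transcription, so here is a small stand-in (made-up numbers) checking the row-by-column rule against NumPy's @ operator:

```python
import numpy as np

# Hypothetical 2x3 matrix times 3-vector (columns of A match rows of v)
A = np.array([[1, 2, 3],
              [4, 5, 6]])
v = np.array([1, 0, 2])

# Row-by-column rule, written out by hand
by_hand = np.array([
    1 * 1 + 2 * 0 + 3 * 2,   # row 1 of A dot v -> 7
    4 * 1 + 5 * 0 + 6 * 2,   # row 2 of A dot v -> 16
])

print(A @ v)   # NumPy agrees: [ 7 16 ]
```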

Matrix Inverse. Denoted X⁻¹. Only for square matrices, and it only exists if the matrix is full rank: all columns (rows) are linearly independent, but I'll spare the details.

Rank Deficient Matrices: for example, one column is a multiple of another, or two columns add up to a third. SPM can handle rank deficiency, if the contrasts are specified properly.

Can you find the rank deficiency?

Inverting a rectangular matrix: if the columns (only) are linearly independent, then X'X is invertible, and the pseudoinverse is (X'X)⁻¹X'.

Inverting a rank-deficient matrix: I'm not going to get into the nitty gritty; pinv() in MATLAB does it. You *must* be careful if you go this route on your own. You could accidentally do something silly, but SPM seems to have built-in controls so you don't.
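A quick numerical sketch of rank deficiency and the pseudoinverse, using a made-up design matrix (NumPy's pinv plays the role of MATLAB's):

```python
import numpy as np

# Hypothetical rank-deficient design: column 3 = column 1 + column 2
X = np.array([[1., 0., 1.],
              [1., 0., 1.],
              [0., 1., 1.],
              [0., 1., 1.]])

print(np.linalg.matrix_rank(X))   # 2, not 3 -> rank deficient

# X'X is singular, so (X'X)^-1 doesn't exist; the pseudoinverse still does
Xpinv = np.linalg.pinv(X)   # NumPy's analogue of MATLAB's pinv()
print(Xpinv.shape)          # (3, 4)
```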

Back to linear regression: Y = Xβ + ε, where Y is (n×1), X is (n×p), β is (p×1), and ε is (n×1).

Viewing the Design Matrix. Look at the actual numbers (columns M, F, and age), or look at X in an image representation, where darker = smaller #.

Multiple Linear Regression. The distribution of Y is a multivariate Normal: Y ~ N(Xβ, σ²I).

Multiple Linear Regression. β̂ = (X'X)⁻¹X'Y is really easy to derive. It's the same as least squares, but much easier to understand and write code for; thanks, linear algebra!

σ̂² = (Y - Xβ̂)'(Y - Xβ̂) / (N - p), where N = length(Y) and p = length(β).

In the degrees of freedom N - p, you can take p = length(β), or Rank(X).

Statistical Properties. E(β̂) = β, so the estimate is unbiased, and Var(β̂) = σ²(X'X)⁻¹, but we don't know σ².

Take away: matrix algebra makes GLM estimation waaay easier. Make sure you're comfortable multiplying a matrix and a vector. Handy to know how to estimate the parameters.
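The full matrix-math recipe for the GLM, sketched in NumPy on simulated data (the design, true betas, and noise level are all assumptions made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: intercept plus an age covariate, n = 20 subjects
n = 20
age = rng.uniform(20, 60, size=n)
X = np.column_stack([np.ones(n), age])
beta_true = np.array([0.3, 0.007])
Y = X @ beta_true + rng.normal(0, 0.05, size=n)

# GLM estimates via matrix math
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y   # (X'X)^-1 X'Y
resid = Y - X @ beta_hat
N, p = X.shape
sigma2_hat = (resid @ resid) / (N - p)        # SSE / (N - p)
var_beta_hat = sigma2_hat * np.linalg.inv(X.T @ X)  # est. Var(beta-hat)

print(beta_hat)
print(sigma2_hat)
```

In practice you would solve the normal equations (or use a pseudoinverse) rather than form the explicit inverse, but the explicit form mirrors the derivation.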

Ask me some questions. Recall the GLM is flexible: one-sample t-test, two-sample t-test, paired t-test, ANOVA, ANCOVA. What do the models look like?

Do you know the answers? What is least squares? What is a residual? How do you multiply a matrix and a vector? What are degrees of freedom? How do you obtain the estimates for the GLM using matrix math, including the variance?

Let's set up some simple models: the 1-sample t-test and the 2-sample t-test, with contrasts!

1-sample t-test: the design matrix is a single column of ones, so in matrix form [Y_1, Y_2, ..., Y_N]' = [1, 1, ..., 1]'·β0 + [ε_1, ..., ε_N]'. Multiply out the right hand side: each row says Y_i = β0 + ε_i, and the estimate of β0 turns out to be the sample mean.

But why is it the mean and not something else? Because we're using least squares! I'm going to write this out.
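That least squares yields the mean can also be checked numerically (toy data, assumed for illustration):

```python
import numpy as np

# 1-sample t-test design: a single column of ones
Y = np.array([3.0, 5.0, 4.0, 6.0, 2.0])
X = np.ones((len(Y), 1))

beta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y   # (X'X)^-1 X'Y
print(beta_hat[0], Y.mean())   # identical: least squares -> sample mean
```

Here X'X = N and X'Y = ΣY_i, so the matrix formula literally reduces to ΣY_i / N.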

Two-sample t-test. There are at least 2 ways I can think of parameterizing this! Start with the easiest: a person is either in group 1 or in group 2.

Y_i = β1·{is sub i in group 1?} + β2·{is sub i in group 2?} + ε_i, i.e. Y_i = β1·G1_i + β2·G2_i + ε_i, where G1 and G2 are group indicator variables. In matrix form, the design matrix has one indicator column per group.

In this parameterization, β1 is the mean for Group 1 and β2 is the mean for Group 2.

Two-sample t-test (another way). Now you do it: unwrap what Y_i = β1 + β2·{is subject i in Group 2?} + ε_i means.
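Both parameterizations can be sketched side by side on made-up data (3 subjects per group, numbers assumed for illustration):

```python
import numpy as np

# Hypothetical: 3 subjects in group 1, then 3 in group 2
Y = np.array([4.0, 5.0, 6.0, 8.0, 9.0, 10.0])
g2 = np.array([0, 0, 0, 1, 1, 1])

# Parameterization 1: one indicator column per group
X1 = np.column_stack([1 - g2, g2])
b1 = np.linalg.pinv(X1) @ Y   # [group-1 mean, group-2 mean]

# Parameterization 2: intercept plus group-2 indicator
X2 = np.column_stack([np.ones(6), g2])
b2 = np.linalg.pinv(X2) @ Y   # [group-1 mean, group-2 minus group-1]

print(b1)   # [5. 9.]
print(b2)   # [5. 4.]
```

The fitted values are identical; only the meaning of the betas changes, which is why the contrast vectors must change with the parameterization.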

Contrasts. Contrasts are vectors that pull out what we'd like to test. Using the two-sample t-test from the first example, we might test: Is the mean of G1 larger than 0? Is the mean of G2 larger than 0? Is the mean of G1 > G2?

General idea: take your contrast statement and get it to look like (something) > 0, then figure out the vector c such that cβ = (something).

Is group 1 > 0? We've already established the first beta represents group 1's mean, so c = [1, 0] pulls out the first beta.

Is group 2 > 0? We've already established the second beta represents group 2's mean, so c = [0, 1] pulls out the second beta.

Is group 1 > group 2? First, get something > 0: group 1 - group 2 > 0, so c = [1, -1] gives cβ = β1 - β2.

Can you do this for the second setup of the 2-sample t-test?
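The three contrasts from the first parameterization, applied numerically to the same hypothetical two-group data:

```python
import numpy as np

# Cell-means parameterization: one indicator column per group
Y = np.array([4.0, 5.0, 6.0, 8.0, 9.0, 10.0])
g2 = np.array([0, 0, 0, 1, 1, 1])
X = np.column_stack([1 - g2, g2])
beta_hat = np.linalg.pinv(X) @ Y   # [group-1 mean, group-2 mean]

# Contrast vectors pull out what we'd like to test
c_g1 = np.array([1, 0])     # mean of group 1 > 0?
c_g2 = np.array([0, 1])     # mean of group 2 > 0?
c_diff = np.array([1, -1])  # group 1 > group 2?

print(c_g1 @ beta_hat)    # 5.0
print(c_g2 @ beta_hat)    # 9.0
print(c_diff @ beta_hat)  # -4.0
```

With c·β̂ in hand, the t-statistic would divide by its standard error, √(σ̂² c(X'X)⁻¹c'); here group 2's mean is larger, so the [1, -1] contrast comes out negative.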

Take away. Did you feel pretty confident with the last example? Yes = yay! No = ask questions!

That's it! I'm guessing we need to at least stretch our legs right now.