Linear Mixed Models: Methodology and Algorithms

David M. Allen, University of Kentucky
March 6, 2017

C Topics from Calculus

Maximum likelihood and REML estimation involve the minimization of a negative log likelihood function with respect to the parameters. Minimization is done using Newton's method, which requires the derivatives of the negative log likelihood function. The likelihood function is a composition of functions, so the chain rule facilitates determining the derivatives. This appendix gives the required calculus.

Section C.1 The Chain Rule

The chain rule is a fundamental rule of differentiation. Some physicists claim that the chain rule is the most important theorem in all of mathematics (Hubbard and Hubbard [1]).

Statement of the Chain Rule

Let $g$ be an $m$-vector of functions having $n$ arguments and $f$ be a $p$-vector of functions having $m$ arguments. If $g$ is differentiable at $x$ and $f$ is differentiable at $g(x)$, then the composition $f(g(x))$ is differentiable at $x$, and its derivative is given by
$$\frac{d}{dx}\, f(g(x)) = \left.\frac{d}{du}\, f(u)\right|_{u=g(x)} \frac{d}{dx}\, g(x)$$
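As a numerical illustration of the statement above, the sketch below builds hypothetical maps $g:\mathbb{R}^2\to\mathbb{R}^3$ and $f:\mathbb{R}^3\to\mathbb{R}^2$ (chosen only for this example, not taken from the text) and checks that the product of Jacobians matches a finite-difference derivative of the composition:

```python
import numpy as np

# Hypothetical g: R^2 -> R^3 and f: R^3 -> R^2, used only to illustrate the rule.
def g(x):
    return np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])

def Jg(x):  # 3x2 Jacobian of g
    return np.array([[x[1],         x[0]],
                     [np.cos(x[0]), 0.0],
                     [0.0,          2.0 * x[1]]])

def f(u):
    return np.array([u[0] + u[1] * u[2], np.exp(u[2])])

def Jf(u):  # 2x3 Jacobian of f
    return np.array([[1.0, u[2], u[1]],
                     [0.0, 0.0,  np.exp(u[2])]])

x = np.array([0.7, -1.3])
# Chain rule: the Jacobian of the composition is the product of Jacobians.
J_chain = Jf(g(x)) @ Jg(x)  # 2x2

# Central finite-difference check of d/dx f(g(x)).
h = 1e-6
J_fd = np.empty((2, 2))
for j in range(2):
    e = np.zeros(2); e[j] = h
    J_fd[:, j] = (f(g(x + e)) - f(g(x - e))) / (2 * h)

assert np.allclose(J_chain, J_fd, atol=1e-5)
```

The matrix dimensions ($p \times m$ times $m \times n$) fall out automatically once each Jacobian is written with outputs indexing rows and arguments indexing columns.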

Section C.2 Some Matrix Derivatives

The likelihood function of the multivariate normal distribution involves both the determinant and inverse of the variance matrix. For application of Newton's algorithm, the first and second derivatives of the determinant and inverse of the variance matrix are required. Finding these derivatives is the subject of this section.

Notation

Any letter could be used to represent the matrix under discussion. I will use $V$ since its use is in the context of a variance matrix. Assume the elements of $V$ are functions of a vector of parameters $\theta$. This is emphasized by writing it as $V(\theta)$.

Derivative of an Inverse Matrix

The derivative of an inverse is the simpler of the two cases considered. The defining relationship between a matrix and its inverse is
$$V(\theta)\, V^{-1}(\theta) = I$$
The derivative of both sides with respect to the $k$th element of $\theta$ is
$$\frac{d\, V(\theta)}{d\theta_k}\, V^{-1}(\theta) + V(\theta)\, \frac{d\, V^{-1}(\theta)}{d\theta_k} = 0$$
Straightforward manipulation gives
$$\frac{d\, V^{-1}(\theta)}{d\theta_k} = -\,V^{-1}(\theta)\, \frac{d\, V(\theta)}{d\theta_k}\, V^{-1}(\theta) \tag{C.2.1}$$
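Identity (C.2.1) can be verified numerically. The sketch below assumes, purely as an example, a variance matrix of the form $V(\theta) = AA^t\theta_1 + I\theta_2$ with a random $A$, and compares the analytic derivative of the inverse against a central finite difference:

```python
import numpy as np

# Assumed example form V(theta) = A A^t theta_1 + I theta_2 (not from the text).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))

def V(theta):
    return A @ A.T * theta[0] + np.eye(4) * theta[1]

theta = np.array([3.0, 2.0])
dV_dtheta1 = A @ A.T  # partial derivative of V with respect to theta_1
Vinv = np.linalg.inv(V(theta))

# Analytic derivative of the inverse, equation (C.2.1):
dVinv_analytic = -Vinv @ dV_dtheta1 @ Vinv

# Central finite difference in theta_1:
h = 1e-6
dVinv_fd = (np.linalg.inv(V(theta + [h, 0.0])) -
            np.linalg.inv(V(theta - [h, 0.0]))) / (2 * h)

assert np.allclose(dVinv_analytic, dVinv_fd, atol=1e-5)
```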

Analogies

There are two analogies to one-variable calculus in the derivation above: the derivative of a product and implicit differentiation.

The Derivative of a Determinant

For the discussion of the derivative of a determinant, I temporarily suspend the dependence of $V$ on $\theta$ and derive the derivative with respect to an element of $V$. The derivative with respect to an element of $\theta$ is then brought in via the chain rule.

The Cofactor of a Matrix

For a square matrix $V$, the minor of its $(i, j)$ entry is defined to be the determinant of the submatrix obtained by removing from $V$ its $i$th row and $j$th column, and it is denoted by $M_{ij}$. Then $C_{ij} = (-1)^{i+j} M_{ij}$ is called the $(i, j)$ cofactor of $V$.

The Determinant of a Matrix

The determinant of $V$ ($n \times n$) may be expressed as
$$\det(V) = \sum_{i=1}^{n} v_{ij}\, C_{ij}$$
for any fixed $j$, or
$$\det(V) = \sum_{j=1}^{n} v_{ij}\, C_{ij}$$
for any fixed $i$. These are called column and row expansions respectively.
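A minimal sketch of the row expansion, applied recursively along the first row and checked against NumPy's determinant (the function name and test matrix are illustrative, not from the text):

```python
import numpy as np

def det_cofactor(V):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = V.shape[0]
    if n == 1:
        return V[0, 0]
    total = 0.0
    for j in range(n):
        # Minor M_{0j}: delete row 0 and column j, then recurse.
        minor = np.delete(np.delete(V, 0, axis=0), j, axis=1)
        total += (-1) ** j * V[0, j] * det_cofactor(minor)
    return total

V = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(det_cofactor(V), np.linalg.det(V))
```

The recursion is $O(n!)$, so it is useful only as a definition and for derivations; numerical software evaluates determinants by factorization instead.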

Cofactor Matrix

For a matrix
$$V = \begin{pmatrix} v_{11} & v_{12} & \cdots & v_{1n} \\ v_{21} & v_{22} & \cdots & v_{2n} \\ \vdots & \vdots & & \vdots \\ v_{n1} & v_{n2} & \cdots & v_{nn} \end{pmatrix}$$
the cofactor matrix is
$$C = \begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{pmatrix}$$

The Adjugate and Inverse Matrices

The adjugate matrix is the transpose of the cofactor matrix, $\operatorname{adj}(V) = C^{t}$. Provided $\det(V) \neq 0$, the inverse of $V$ is
$$V^{-1} = \frac{1}{\det(V)}\, \operatorname{adj}(V)$$
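The adjugate formula for the inverse can be checked directly. The sketch below builds the cofactor matrix entry by entry (helper name and test matrix are illustrative) and confirms $V^{-1} = \operatorname{adj}(V)/\det(V)$:

```python
import numpy as np

def cofactor_matrix(V):
    """Matrix of cofactors C_ij = (-1)^{i+j} M_ij."""
    n = V.shape[0]
    C = np.empty_like(V)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(V, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

V = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
adjV = cofactor_matrix(V).T        # adjugate = transpose of the cofactor matrix
Vinv = adjV / np.linalg.det(V)     # V^{-1} = adj(V) / det(V)
assert np.allclose(Vinv, np.linalg.inv(V))
```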

The Derivative With Respect to an Element

The derivative of the logarithm of the determinant of $V$ with respect to an element is
$$\frac{d}{d v_{ij}}\, \log(\det(V)) = \frac{1}{\det(V)}\, C_{ij} = \left[ V^{-1} \right]_{ji}$$
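The index order matters: the derivative with respect to $v_{ij}$ is the $(j, i)$ entry of the inverse, not the $(i, j)$ entry. A quick numerical check on an arbitrary nonsymmetric matrix (chosen for illustration, since on a symmetric matrix the two entries coincide):

```python
import numpy as np

# Perturb the single element v_{ij} and compare with [V^{-1}]_{ji}.
V = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 1.0, 2.0]])
i, j, h = 0, 1, 1e-7

Vp = V.copy(); Vp[i, j] += h
Vm = V.copy(); Vm[i, j] -= h
fd = (np.log(np.linalg.det(Vp)) - np.log(np.linalg.det(Vm))) / (2 * h)

# Matches the (j, i) entry of the inverse, per the transpose in adj(V) = C^t.
assert np.isclose(fd, np.linalg.inv(V)[j, i], atol=1e-6)
```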

Derivative with Respect to θ

Bring back the dependency of $V$ on $\theta$ and apply the chain rule:
$$\frac{d}{d\theta_k}\, \log(\det(V(\theta))) = \frac{1}{\det(V)} \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{d\, v_{ij}(\theta)}{d\theta_k}\, C_{ij} = \sum_{i=1}^{n} \sum_{j=1}^{n} \left[ V^{-1} \right]_{ji} \frac{d\, v_{ij}(\theta)}{d\theta_k} = \operatorname{tr}\!\left( V^{-1}\, \frac{d\, V(\theta)}{d\theta_k} \right) \tag{C.2.2}$$
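Identity (C.2.2) is easy to verify numerically. As with the inverse-derivative check, the sketch assumes an example form $V(\theta) = AA^t\theta_1 + I\theta_2$ with a random $A$:

```python
import numpy as np

# Check d/dtheta_1 log det V = tr(V^{-1} dV/dtheta_1) on an assumed example.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))

def V(theta):
    return A @ A.T * theta[0] + np.eye(4) * theta[1]

theta = np.array([3.0, 2.0])
dV = A @ A.T  # dV/dtheta_1
analytic = np.trace(np.linalg.inv(V(theta)) @ dV)

# Central finite difference of log det V in theta_1:
h = 1e-6
fd = (np.log(np.linalg.det(V(theta + [h, 0.0]))) -
      np.log(np.linalg.det(V(theta - [h, 0.0])))) / (2 * h)
assert np.isclose(analytic, fd, atol=1e-5)
```

The trace form is the one used in practice: it avoids forming the double sum explicitly and needs only one solve (or factorization) of $V$ per Newton step.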

Exercises

The following exercises depend on these quantities:
$$Y = \begin{pmatrix} 5.4 \\ 5.9 \\ 5.5 \\ 1.1 \end{pmatrix}, \qquad A = \begin{pmatrix} 1 & 2 \\ 6 & 1 \\ 3 & 3 \\ 2 & 1 \end{pmatrix},$$
$$\theta = [\theta_1, \theta_2]^{t}, \qquad V(\theta) = A A^{t}\, \theta_1 + I\, \theta_2.$$
$Y$ is a realization of $N_4(0, V(\theta))$.

Exercise C.2.1. Let $L(\theta; Y)$ represent negative two times the log likelihood function of $\theta$. Give the expression for $L(\theta; Y)$. You may ignore the constant term.

Exercise C.2.2. Find the derivative of $L(\theta; Y)$ with respect to $\theta_1$ evaluated at $[\theta_1, \theta_2] = [3, 2]$.

Exercise C.2.3. Find the derivative of $L(\theta; Y)$ with respect to $\theta_2$ evaluated at $[\theta_1, \theta_2] = [3, 2]$.
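One way to check work on these exercises is to code the objective and its gradient and compare against finite differences. The sketch below does exactly that (it deliberately does not print the answers); it assumes $V(\theta) = AA^t\theta_1 + I\theta_2$ as in the exercise setup:

```python
import numpy as np

# Quantities from the exercises.
Y = np.array([5.4, 5.9, 5.5, 1.1])
A = np.array([[1.0, 2.0],
              [6.0, 1.0],
              [3.0, 3.0],
              [2.0, 1.0]])

def V(theta):
    return A @ A.T * theta[0] + np.eye(4) * theta[1]

def L(theta):
    """-2 log likelihood of N_4(0, V(theta)), constant term dropped."""
    Vt = V(theta)
    return np.log(np.linalg.det(Vt)) + Y @ np.linalg.inv(Vt) @ Y

# Gradient from (C.2.1) and (C.2.2):
#   dL/dtheta_k = tr(V^{-1} dV/dtheta_k) - Y^t V^{-1} (dV/dtheta_k) V^{-1} Y
def grad_L(theta):
    Vinv = np.linalg.inv(V(theta))
    g = np.empty(2)
    for k, dV in enumerate((A @ A.T, np.eye(4))):
        g[k] = np.trace(Vinv @ dV) - Y @ Vinv @ dV @ Vinv @ Y
    return g

# Finite-difference sanity check at the point used in the exercises.
theta = np.array([3.0, 2.0])
h = 1e-6
fd = np.array([(L(theta + [h, 0.0]) - L(theta - [h, 0.0])) / (2 * h),
               (L(theta + [0.0, h]) - L(theta - [0.0, h])) / (2 * h)])
assert np.allclose(grad_L(theta), fd, atol=1e-5)
```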

References

[1] John H. Hubbard and Barbara Burke Hubbard. Vector Calculus, Linear Algebra, and Differential Forms. Fifth edition. Ithaca, New York: Matrix Editions, 2015.