The role of random effects
Course topics (tentative)
- random effects
- linear mixed models
- analysis of variance
- frequentist likelihood-based inference (MLE and REML)
- prediction
- Bayesian inference

The role of random effects
Rasmus Waagepetersen
Department of Mathematics, Aalborg University, Denmark
February 5

Outline for today
- examples of data sets
- analysis of variance
- multivariate normal distribution
- density for multivariate normal distribution

Reflectance (colour) measurements for samples of cardboard (egg trays); a project at the Department of Biotechnology, Chemistry and Environmental Engineering. Four replications at the same position on each cardboard. For five cardboards: four replications at each of four positions on each cardboard.
[Plot: reflectance ("Reflektans") versus cardboard number ("Pap.nr.")]
Question: colour variation between/within cardboards?
Orthodontic growth curves
Distance between the pituitary and the pterygomaxillary fissure for children of age 8-14.
[Plots: distance versus age, and distance versus age grouped according to child]
Different intercepts for different children.

Compression of mats for cows
Compression versus pressure for two brands of mats.
[Plot: compression versus pressure]
Non-linear relation

y = (ab + cx^d)/(b + x^d)

Random variation between mats of the same brand, small measurement noise.

Electricity consumption and temperature in Sweden
[Plots: electricity consumption ("elforbrug") and temperature ("temperatur") versus time index]
Consumption versus temperature, and residuals from a linear regression of consumption on temperature.
[Plots: consumption versus temperature with fitted line; residuals ("fejlled") y_t versus time]
Serial correlation!

Model for reflectances: one-way ANOVA
Four replications on each cardboard. Models:

Y_ij = ξ + ε_ij,  i = 1,...,k, j = 1,...,m

or

Y_ij = ξ + α_i + ε_ij

where ξ and the α_i are fixed unknown parameters and the ε_ij are stochastic noise, or

Y_ij = ξ + U_i + ε_ij

where the U_i are random variables. Which is most relevant?
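The third model can be simulated directly. A minimal sketch in Python (the course exercises use R); k, m and the parameter values are chosen purely for illustration:

```python
# Sketch: simulating the random-effects model Y_ij = xi + U_i + eps_ij.
# All numerical values below are illustrative, not from the course data.
import numpy as np

rng = np.random.default_rng(0)
k, m = 34, 4                    # cardboards, replications per cardboard
xi, tau2, sigma2 = 50.0, 4.0, 1.0

U = rng.normal(0.0, np.sqrt(tau2), size=k)           # one random effect per cardboard
eps = rng.normal(0.0, np.sqrt(sigma2), size=(k, m))  # within-cardboard noise
Y = xi + U[:, None] + eps                            # Y[i, j] = xi + U_i + eps_ij

print(Y.shape)  # (34, 4)
```

All m observations in a row share the same draw U_i, which is exactly what makes them correlated.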
One role of random effects: parsimonious and population-relevant models
With fixed effects α_i: many parameters (ξ, σ², α_2,...,α_34). The parameters α_2,...,α_34 are not interesting, as they just represent intercepts for specific cardboards which are individually not of interest. With random effects: just three parameters (ξ, σ² = Var ε_ij and τ² = Var U_i). Hence a parsimonious model. The variance parameters are interesting for several reasons.

Second role of random effects: quantify sources of variation
Quantify sources of variation (e.g. quality control): is the pulp for paper production too heterogeneous? With the random effects model Y_ij = ξ + U_i + ε_ij we have a decomposition of variance:

Var Y_ij = Var U_i + Var ε_ij = τ² + σ²

Hence we can quantify the variation between cardboards (τ²) and within a cardboard (σ²).

Third role: modeling of covariance and correlation
The ratio γ = τ²/σ² is the signal-to-noise ratio. The proportion of variance

τ²/(σ² + τ²) = γ/(γ + 1)

is called the intra-class correlation. A high proportion of between-cardboard variance leads to high correlation. Covariances:

Cov[Y_ij, Y_i'j'] = 0 if i ≠ i';  τ² if i = i', j ≠ j';  τ² + σ² if i = i', j = j'

Correlations:

Corr[Y_ij, Y_i'j'] = 0 if i ≠ i';  τ²/(σ² + τ²) if i = i', j ≠ j';  1 if i = i', j = j'

That is, observations for the same cardboard are correlated! Correct modeling of correlation is important for correct evaluation of uncertainty.
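The covariance structure above is block diagonal over cardboards. A sketch in Python with illustrative values (`one_way_cov` is a hypothetical helper, not from the course material):

```python
# Sketch: the one-way random-effects covariance matrix as a block-diagonal matrix.
import numpy as np

def one_way_cov(k: int, m: int, tau2: float, sigma2: float) -> np.ndarray:
    """Cov[Y_ij, Y_i'j']: tau2 + sigma2 on the diagonal, tau2 within a
    cardboard's m-by-m block, 0 across different cardboards."""
    block = np.full((m, m), tau2) + sigma2 * np.eye(m)
    return np.kron(np.eye(k), block)

V = one_way_cov(k=3, m=2, tau2=2.0, sigma2=1.0)
print(V[0, 0], V[0, 1], V[0, 2])  # 3.0 2.0 0.0
```

With τ² = 2 and σ² = 1 the intra-class correlation is V[0, 1]/V[0, 0] = 2/3, matching γ/(γ + 1) for γ = 2.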
Fourth role: correct evaluation of uncertainty
Suppose we wish to estimate ξ = E Y_ij. Due to correlation, observations on the same cardboard are to some extent redundant. The estimate is the empirical average ξ̂ = Ȳ. Evaluation of Var Ȳ:

A model erroneously ignoring variation between cardboards: Y_ij = ξ + ε_ij with Var ε_ij = σ²_total = σ² + τ². Naive variance expression (with n = km):

Var Ȳ = σ²_total/n = (σ² + τ²)/(km)

The correct model with random cardboard effects: Y_ij = ξ + U_i + ε_ij with Var U_i = τ², Var ε_ij = σ². Correct variance expression:

Var Ȳ = τ²/k + σ²/(km)

With the first model, the variance is underestimated! For Var Ȳ → 0, is it enough that km → ∞?

Balanced one-way ANOVA (analysis of variance)
Decomposition of empirical variance/sums of squares (i = 1,...,k, j = 1,...,m):

SST = Σ_ij (Y_ij − Ȳ)² = Σ_ij (Y_ij − Ȳ_i)² + m Σ_i (Ȳ_i − Ȳ)² = SSE + SSB

Expected sums of squares:

E SSE = k(m − 1)σ²
E SSB = m(k − 1)τ² + (k − 1)σ²

Moment-based estimates:

σ̂² = SSE/(k(m − 1)),  τ̂² = (SSB/(k − 1) − σ̂²)/m

NB: τ̂² may be negative. Slightly less nice formulae in the unbalanced case (Thm 5.3 in M&T).

Design of experiment - one-way ANOVA
Suppose Y_ij is the outcome of the analysis of the jth sample in the ith lab, τ² = 2 is the variance between labs and σ² = 1 the measurement variance. Suppose we want to analyze 100 samples in total. What is then the optimal number of labs that makes Var Ȳ minimal? Suppose instead we have 5000 kr. available, there is an initial cost of 200 kr. for each lab and subsequently 10 kr. for the analysis of each sample. What is then the optimal number k of labs that gives the smallest Var Ȳ? Exercise!

Two levels of random effects
For five cardboards we have four replications at each of four positions. Hierarchical model (nested random effects):

Y_ipj = ξ + U_i + U_ip + ε_ipj,  Var Y_ipj = τ² + ω² + σ²
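The moment-based estimates and the two Var Ȳ expressions can be checked on simulated balanced data. A sketch with purely illustrative parameter values:

```python
# Sketch: moment estimates sigma2_hat = SSE/(k(m-1)) and
# tau2_hat = (SSB/(k-1) - sigma2_hat)/m on simulated balanced one-way data.
import numpy as np

rng = np.random.default_rng(1)
k, m, xi, tau2, sigma2 = 200, 5, 0.0, 2.0, 1.0
Y = xi + rng.normal(0, np.sqrt(tau2), (k, 1)) + rng.normal(0, np.sqrt(sigma2), (k, m))

Ybar_i = Y.mean(axis=1, keepdims=True)        # cardboard-wise averages
SSE = ((Y - Ybar_i) ** 2).sum()               # within sum of squares
SSB = m * ((Ybar_i - Y.mean()) ** 2).sum()    # between sum of squares

sigma2_hat = SSE / (k * (m - 1))
tau2_hat = (SSB / (k - 1) - sigma2_hat) / m

# Correct vs naive variance of the overall mean:
var_correct = tau2 / k + sigma2 / (k * m)
var_naive = (tau2 + sigma2) / (k * m)
print(round(sigma2_hat, 2), round(tau2_hat, 2), var_correct > var_naive)
```

With these values the naive expression gives 3/1000 while the correct one gives 2/200 + 1/1000, more than three times larger, illustrating the underestimation.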
Covariance structure for nested random effects model
Y_ipj = ξ + U_i + U_ip + ε_ipj

Cov(Y_ipj, Y_lqk) = 0 if i ≠ l;  τ² if i = l, p ≠ q (same cardboard);  τ² + ω² if i = l, p = q, j ≠ k (same cardboard and position);  τ² + ω² + σ² if i = l, p = q, j = k (= Var Y_ipj)

Correlation structure for nested random effects model

Corr(Y_ipj, Y_lqk) = 0 if i ≠ l;  τ²/(σ² + ω² + τ²) if i = l, p ≠ q;  (τ² + ω²)/(σ² + ω² + τ²) if i = l, p = q, j ≠ k;  1 if i = l, p = q, j = k

Model for longitudinal growth data
Y_ij = ξ_i + η_i x_ij + ε_ij, where i indexes child and j time. Random intercepts and slopes? Correlated errors ε_ij? E.g. AR(1):

ε_ij = φ ε_i(j−1) + ν_ij

Summary - role of random effects
Random effects models are useful for:
- quantifying different sources of variation
- appropriate modelling of correlation (and hence correct evaluation of uncertainty of parameter estimates)
- situations where individual subject-specific effects are not of interest - the population variation of the effects is more interesting
- more parsimonious models (replace many systematic effects by just one variance)
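The nested covariance structure can likewise be assembled as nested blocks. A sketch with illustrative sizes and variances (`nested_cov` is a hypothetical helper):

```python
# Sketch: covariance matrix of the nested model Y_ipj = xi + U_i + U_ip + eps_ipj,
# with cards i, positions p, replications j.
import numpy as np

def nested_cov(n_cards, n_pos, n_rep, tau2, omega2, sigma2):
    # Within one position: omega2 shared, sigma2 on the diagonal.
    pos_block = np.full((n_rep, n_rep), omega2) + sigma2 * np.eye(n_rep)
    # Within one card: position blocks on the diagonal, plus tau2 everywhere.
    card_block = np.kron(np.eye(n_pos), pos_block) + tau2
    # Different cards are independent: block diagonal over cards.
    return np.kron(np.eye(n_cards), card_block)

V = nested_cov(2, 2, 2, tau2=1.0, omega2=2.0, sigma2=4.0)
# same card+position: tau2+omega2+sigma2; same card, other position: tau2; other card: 0
print(V[0, 0], V[0, 1], V[0, 2], V[0, 4])  # 7.0 3.0 1.0 0.0
```

The four printed entries are exactly the four cases of the covariance formula above.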
Likelihood based inference
We need a probability density. The normal distribution allows us to build tractable and (reasonably) flexible models.

Multivariate normal distribution
Let μ ∈ R^p and let Σ be a symmetric and positive semidefinite p × p matrix. Spectral decomposition of Σ:

Σ = OΛO^T

where O is an orthonormal matrix (columns = eigenvectors) and Λ is the diagonal matrix of eigenvalues.
Definition: a random p × 1 vector Y is p-variate normal N_p(μ, Σ) if Y is distributed as μ + OΛ^{1/2}Z, where Z = (Z_1,...,Z_p) is a vector of independent standard normal random variables.

Geometric interpretation and PCA (principal component analysis)
Assume μ = 0. Λ^{1/2} is a scaling and O a rotation, i.e. Y is a scaled and rotated Z. Starting with Y: let v_i be the ith eigenvector (ith column of O). Then v_i^T Y = λ_i^{1/2} Z_i (the projection on v_i) is the ith principal component, with variance λ_i. Principal components are independent (uncorrelated if Y is not normal). Since λ_1 ≥ λ_2 ≥ ... ≥ λ_p, v_1^T Y explains most of the variance in Y (Σ_i Var Y_i = Σ_i λ_i), etc. v_i is called the loading vector for the ith PC.

Equivalent definitions
Definition: a random p × 1 vector Y is p-variate normal with mean μ and covariance matrix Σ if a^T Y is univariate normal with mean a^T μ and variance a^T Σ a for any a ∈ R^p.
Definition: a random p × 1 vector Y is p-variate normal with mean μ and covariance matrix Σ if Y has characteristic function

φ(t) = E exp(it^T Y) = exp(it^T μ − t^T Σ t/2)

From the last definition it follows easily that

Y ~ N_p(μ, Σ) ⇒ AY ~ N_m(Aμ, AΣA^T)

for any m × p matrix A. Since Var a^T Y = a^T Σ a ≥ 0, it follows that Σ must be positive semi-definite.
NB: the distribution of a random vector is uniquely determined by its characteristic function.
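The constructive definition via the spectral decomposition translates directly into a sampler, and the PCA interpretation can be checked on the samples. A sketch with an illustrative 2 × 2 Σ:

```python
# Sketch: sampling N_p(mu, Sigma) as mu + O Lambda^{1/2} Z, then checking that
# the projections v_i^T (Y - mu) have variance lambda_i (the PCA view).
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, -1.0])
Sigma = np.array([[4.0, 1.0], [1.0, 2.0]])

lam, O = np.linalg.eigh(Sigma)            # Sigma = O diag(lam) O^T
Z = rng.standard_normal((2, 100_000))     # independent standard normals
Y = mu[:, None] + O @ (np.sqrt(lam)[:, None] * Z)

proj = O.T @ (Y - mu[:, None])            # principal components v_i^T (Y - mu)
print(proj.var(axis=1))                   # close to lam
```

The empirical variances of the two projections match the eigenvalues of Σ up to Monte Carlo error, and the projections are (empirically) uncorrelated.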
Density of multivariate normal
Suppose the Z_i are independent standard normal. Then Z = (Z_1,...,Z_p) ~ N_p(0, I) with joint density

f_Z(z_1,...,z_p) = (2π)^{−p/2} exp(−‖z‖²/2)

Suppose further that Y ~ N_p(μ, Σ) where Σ is positive definite. Then Σ = LL^T for some invertible matrix L (Cholesky or spectral decomposition). Thus Y is distributed as μ + LZ and the Jacobian of the transformation is |L| = |Σ|^{1/2}. By the multivariate transformation theorem,

f_Y(y_1,...,y_p) = (2π)^{−p/2} |Σ|^{−1/2} exp(−(y − μ)^T Σ^{−1} (y − μ)/2)

Density if Σ is not positive definite
Suppose Σ has rank r < p. Then λ_{r+1} = ... = λ_p = 0 and Y − μ lives on the subspace L = span{v_1,...,v_r} of R^p. It is possible to define a density function on L with respect to the r-dimensional Lebesgue measure on L. It is of the form

(2π)^{−r/2} (Π_{i=1}^r λ_i)^{−1/2} exp[−(y − μ)^T Σ^− (y − μ)/2]

where Σ^− is the generalized inverse Σ^− = O diag(λ_1^{−1},...,λ_r^{−1}, 0,...,0) O^T. (We will see several examples of this during the course.)

Exercises
1. Show the results regarding covariances and correlations in the one-way and two-level ANOVA (slides 16 and 21-22).
2. Compute Var Ȳ for the one-way ANOVA.
3. Compute the expectations of SSB and SSE in the one-way ANOVA.
4. Fit linear models for the orthodontic growth curves with subject-specific intercepts. Draw histograms of the fitted intercepts (they can be extracted using coef()). Check the residuals from the model. Also fit the model with a common intercept and plot residuals against subject.
5. Compute the covariance and correlation structure of observations from linear models with random intercepts and random slopes:

Y_ij = α + U_i + βx_ij + V_i x_ij + ε_ij

where U_i ~ N(0, τ_U²) and V_i ~ N(0, τ_V²) with Corr(U_i, V_i) = ρ (while (U_i, V_i) and (U_i', V_i') are independent when i ≠ i'). What can you say about the variance structure of Y_ij?

More exercises
6. Show that Y ~ N_p(μ, Σ) ⇒ AY ~ N_m(Aμ, AΣA^T).
7. Show that the three definitions of the multivariate normal distribution are equivalent.
8. Solve the design of experiment problem regarding the optimal choice of the number of laboratories for the one-way ANOVA.
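The positive-definite density formula is usually evaluated via the Cholesky factor rather than an explicit inverse. A sketch in Python (`mvn_logpdf` is a hypothetical helper implementing the formula above):

```python
# Sketch: evaluating the N_p(mu, Sigma) log-density via Sigma = L L^T,
# for positive definite Sigma.
import numpy as np

def mvn_logpdf(y, mu, Sigma):
    p = len(mu)
    L = np.linalg.cholesky(Sigma)
    # Solve L u = y - mu, so that u^T u = (y - mu)^T Sigma^{-1} (y - mu)
    u = np.linalg.solve(L, y - mu)
    log_det = 2.0 * np.log(np.diag(L)).sum()   # log |Sigma|
    return -0.5 * (p * np.log(2 * np.pi) + log_det + u @ u)

y = np.array([0.5, -0.5])
print(mvn_logpdf(y, np.zeros(2), np.eye(2)))   # = -log(2*pi) - 0.25, approx -2.0879
```

Using the triangular solve keeps the computation stable and avoids forming Σ^{−1} explicitly.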
One more exercise
9. In Bayesian statistics the following pairwise-difference density is often used as a smoothing prior (it promotes similar values of y_i and y_{i−1}):

f(y_1,...,y_n) ∝ exp(−(1/2) Σ_{i=2}^n (y_i − y_{i−1})²)

9.1 Find the matrix Q playing the role of Σ^{−1}, so that the above is of the form of a multivariate Gaussian density. Is Q invertible?
9.2 Can you find a square root of Q?
9.3 Find the eigenspace for the eigenvalue 0.
9.4 The pairwise-difference density can be viewed as a density on an (n − 1)-dimensional subspace L. Characterize L.
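The quadratic form in the exponent can be explored numerically before solving the exercise analytically. A sketch in Python (the first-difference matrix D below is one convenient construction, not prescribed by the exercise):

```python
# Sketch: the quadratic form sum_{i=2}^n (y_i - y_{i-1})^2 equals y^T Q y
# with Q = D^T D for the (n-1) x n first-difference matrix D.
import numpy as np

n = 5
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # row i: y_{i+1} - y_i
Q = D.T @ D

y = np.array([1.0, 3.0, 2.0, 2.0, 5.0])
quad = ((y[1:] - y[:-1]) ** 2).sum()           # the sum in the exponent
print(np.isclose(y @ Q @ y, quad))             # True
print(np.linalg.matrix_rank(Q))                # 4, i.e. n - 1: Q is singular
```

That Q has rank n − 1 is the numerical hint behind questions 9.1, 9.3 and 9.4.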
Matrices and Multivariate Statistics - II Richard Mott November 2011 Multivariate Random Variables Consider a set of dependent random variables z = (z 1,..., z n ) E(z i ) = µ i cov(z i, z j ) = σ ij =
More informationFitting Linear Statistical Models to Data by Least Squares: Introduction
Fitting Linear Statistical Models to Data by Least Squares: Introduction Radu Balan, Brian R. Hunt and C. David Levermore University of Maryland, College Park University of Maryland, College Park, MD Math
More informationExperimental Design and Data Analysis for Biologists
Experimental Design and Data Analysis for Biologists Gerry P. Quinn Monash University Michael J. Keough University of Melbourne CAMBRIDGE UNIVERSITY PRESS Contents Preface page xv I I Introduction 1 1.1
More informationStat 579: Generalized Linear Models and Extensions
Stat 579: Generalized Linear Models and Extensions Linear Mixed Models for Longitudinal Data Yan Lu April, 2018, week 12 1 / 34 Correlated data multivariate observations clustered data repeated measurement
More informationFunctional Latent Feature Models. With Single-Index Interaction
Generalized With Single-Index Interaction Department of Statistics Center for Statistical Bioinformatics Institute for Applied Mathematics and Computational Science Texas A&M University Naisyin Wang and
More informationChapter 2: Fundamentals of Statistics Lecture 15: Models and statistics
Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Data from one or a series of random experiments are collected. Planning experiments and collecting data (not discussed here). Analysis:
More informationStat260: Bayesian Modeling and Inference Lecture Date: February 10th, Jeffreys priors. exp 1 ) p 2
Stat260: Bayesian Modeling and Inference Lecture Date: February 10th, 2010 Jeffreys priors Lecturer: Michael I. Jordan Scribe: Timothy Hunter 1 Priors for the multivariate Gaussian Consider a multivariate
More informationGaussian Processes 1. Schedule
1 Schedule 17 Jan: Gaussian processes (Jo Eidsvik) 24 Jan: Hands-on project on Gaussian processes (Team effort, work in groups) 31 Jan: Latent Gaussian models and INLA (Jo Eidsvik) 7 Feb: Hands-on project
More informationThe linear model is the most fundamental of all serious statistical models encompassing:
Linear Regression Models: A Bayesian perspective Ingredients of a linear model include an n 1 response vector y = (y 1,..., y n ) T and an n p design matrix (e.g. including regressors) X = [x 1,..., x
More informationVector autoregressions, VAR
1 / 45 Vector autoregressions, VAR Chapter 2 Financial Econometrics Michael Hauser WS17/18 2 / 45 Content Cross-correlations VAR model in standard/reduced form Properties of VAR(1), VAR(p) Structural VAR,
More informationDYNAMIC AND COMPROMISE FACTOR ANALYSIS
DYNAMIC AND COMPROMISE FACTOR ANALYSIS Marianna Bolla Budapest University of Technology and Economics marib@math.bme.hu Many parts are joint work with Gy. Michaletzky, Loránd Eötvös University and G. Tusnády,
More informationTAMS39 Lecture 2 Multivariate normal distribution
TAMS39 Lecture 2 Multivariate normal distribution Martin Singull Department of Mathematics Mathematical Statistics Linköping University, Sweden Content Lecture Random vectors Multivariate normal distribution
More informationHierarchical Modelling for Univariate Spatial Data
Spatial omain Hierarchical Modelling for Univariate Spatial ata Sudipto Banerjee 1 and Andrew O. Finley 2 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A.
More informationvariability of the model, represented by σ 2 and not accounted for by Xβ
Posterior Predictive Distribution Suppose we have observed a new set of explanatory variables X and we want to predict the outcomes ỹ using the regression model. Components of uncertainty in p(ỹ y) variability
More informationAn Introduction to Multilevel Models. PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 25: December 7, 2012
An Introduction to Multilevel Models PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 25: December 7, 2012 Today s Class Concepts in Longitudinal Modeling Between-Person vs. +Within-Person
More information10. Time series regression and forecasting
10. Time series regression and forecasting Key feature of this section: Analysis of data on a single entity observed at multiple points in time (time series data) Typical research questions: What is the
More informationLecture 3: Linear Models. Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012
Lecture 3: Linear Models Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012 1 Quick Review of the Major Points The general linear model can be written as y = X! + e y = vector of observed
More informationI L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
Introduction Edps/Psych/Stat/ 584 Applied Multivariate Statistics Carolyn J Anderson Department of Educational Psychology I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN c Board of Trustees,
More information