Computation of the likelihood function for GLMMs
Rasmus Waagepetersen
Department of Mathematics, Aalborg University, Denmark
April 30, 2012

Outline for today

Computation of the likelihood function:
1. Gaussian quadrature
2. Monte Carlo methods
- Newton-Raphson
- EM-algorithm
- case study of a non-linear mixed effects model

Generalized linear mixed effects models

Consider a stochastic variable Y = (Y_1, ..., Y_n) and random effects U. Two-step formulation of a GLMM:

1. U ~ N(0, Σ).
2. Given a realization u of U, the Y_i are independent and each follows a density f_i(y | u) with mean µ_i = g^{-1}(η_i) and linear predictor η = Xβ + Zu.

That is, conditional on U, the Y_i follow a generalized linear model.

NB: the GLMM is specified in terms of the marginal density of U and the conditional density of Y given U. But the likelihood is the marginal density f(y), which can be hard to evaluate!

Likelihood for generalized linear mixed model

For a normal linear mixed model we can compute the marginal distribution of Y directly: f(y) is the density of a multivariate normal distribution. For a generalized linear mixed model it is difficult to evaluate the integral

f(y) = \int_{\mathbb{R}^m} f(y, u) \, du = \int_{\mathbb{R}^m} f(y \mid u) f(u) \, du

since f(y | u) f(u) is a very complex function.
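To make the last point concrete, the following sketch evaluates f(y) for one group of a random-intercept logistic GLMM by brute-force one-dimensional integration. The data y, covariate x and parameter values are hypothetical, chosen only for illustration.

# Marginal likelihood f(y) = int f(y|u) f(u) du for one group of a
# logistic GLMM with a single random intercept U ~ N(0, tau^2)
y <- c(1, 0, 1, 1)                  # hypothetical binary responses
x <- c(0.5, 1.2, -0.3, 0.8)         # hypothetical covariate
beta <- 0.7                         # hypothetical fixed effect
tau <- 1.5                          # hypothetical random effect std. dev.

integrand <- function(u)            # f(y|u) f(u), vectorized in u
  sapply(u, function(ui) {
    p <- plogis(beta * x + ui)      # conditional means g^{-1}(eta)
    prod(p^y * (1 - p)^(1 - y)) * dnorm(ui, 0, tau)
  })

integrate(integrand, -Inf, Inf)$value   # f(y)

For m random effects the same integral is m-dimensional, which is what makes the methods below necessary.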
Adaptive Gaussian quadrature - one random effect

Gauss-Hermite quadrature (numerical integration):

\int f(x) \varphi(x) \, dx \approx \sum_{i=1}^{n} w_i f(x_i)

where φ is the standard normal density and (x_i, w_i), i = 1, ..., n, are certain arguments and weights which can be looked up in a table. We can replace ≈ with = whenever f is a polynomial of degree 2n - 1 or less.

Adaptive Gauss-Hermite quadrature:

f(y) = \int f(y \mid u) f(u) \, du = \int \frac{f(y \mid u) f(u)}{\varphi(u; \mu_{LP}, \sigma_{LP}^2)} \varphi(u; \mu_{LP}, \sigma_{LP}^2) \, du = \int \frac{f(y \mid \sigma_{LP} x + \mu_{LP}) f(\sigma_{LP} x + \mu_{LP}) \sigma_{LP}}{\varphi(x)} \varphi(x) \, dx

(change of variable: x = (u - µ_LP)/σ_LP).

Advantage: with x = (u - µ_LP)/σ_LP, the ratio

\frac{f(y \mid u) f(u)}{\varphi(u; \mu_{LP}, \sigma_{LP}^2)} = \frac{f(y \mid \sigma_{LP} x + \mu_{LP}) f(\sigma_{LP} x + \mu_{LP}) \sigma_{LP}}{\varphi(x)}

is close to constant (≈ f(y)), hence adaptive G-H quadrature is very accurate.

[Table of GH nodes x_i and weights w_i for n = 5 omitted; the x_i are roots of the Hermite polynomial, computed using ghq in library glmmML. GH schemes for n = 5 and n = 10 are available on the web page.]

Prediction of random effects for GLMM

The conditional mean E[U | Y = y] = ∫ u f(u | y) du is the minimum mean square error predictor, i.e. E(U - Û)² is minimal with Û = H(Y), where H(y) = E[U | Y = y].

Note: it is difficult to evaluate E[U | Y = y] = ∫ u f(y | u) f(u)/f(y) du analytically.

Computation of conditional expectations (prediction):

E[U \mid Y = y] = \frac{1}{f(y)} \int u \, f(y \mid u) f(u) \, du = \frac{1}{f(y)} \int (\sigma_{LP} x + \mu_{LP}) \frac{f(y \mid \sigma_{LP} x + \mu_{LP}) f(\sigma_{LP} x + \mu_{LP}) \sigma_{LP}}{\varphi(x)} \varphi(x) \, dx

The factor (σ_LP x + µ_LP) f(y | σ_LP x + µ_LP) f(σ_LP x + µ_LP) σ_LP / φ(x) behaves like a first-order polynomial in x, hence GH quadrature is still accurate.
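The adaptive scheme can be written out in a few lines of R. A minimal sketch, assuming a single random effect U ~ N(0, τ²) and a conditional log-likelihood supplied as a function logf_cond(u) = Σ_j log f(y_j | u); the nodes and weights are computed here from the Golub-Welsch eigenvalue construction rather than looked up in a table or taken from glmmML::ghq, and all function names are illustrative.

# Gauss-Hermite nodes/weights for the weight function exp(-x^2)
# (Golub-Welsch: eigendecomposition of the Jacobi matrix)
gauss_hermite <- function(n) {
  J <- matrix(0, n, n)
  off <- sqrt(seq_len(n - 1) / 2)
  J[cbind(1:(n - 1), 2:n)] <- off
  J[cbind(2:n, 1:(n - 1))] <- off
  e <- eigen(J, symmetric = TRUE)
  list(x = e$values, w = sqrt(pi) * e$vectors[1, ]^2)
}

# Adaptive GH: centre and scale the nodes by the Laplace mean and
# standard deviation; returns f(y) and the predictor E[U | Y = y]
adaptive_ghq <- function(logf_cond, tau, n_q = 10) {
  h <- function(u) logf_cond(u) + dnorm(u, 0, tau, log = TRUE)  # log f(y|u)f(u)
  opt <- optim(0, function(u) -h(u), method = "BFGS", hessian = TRUE)
  mu_LP <- opt$par                          # mode of f(u|y)
  sigma_LP <- sqrt(1 / opt$hessian[1, 1])   # curvature at the mode
  gh <- gauss_hermite(n_q)
  u <- mu_LP + sqrt(2) * sigma_LP * gh$x    # change of variable
  w <- gh$w / sqrt(pi)                      # weights wrt the N(0,1) density
  # integrand divided by the N(mu_LP, sigma_LP^2) density at each node
  vals <- exp(sapply(u, h) - dnorm(u, mu_LP, sigma_LP, log = TRUE))
  L <- sum(w * vals)                        # approx. f(y)
  list(lik = L, pred = sum(w * u * vals) / L)
}

With the hypothetical data above, adaptive_ghq(function(u) sum(dbinom(y, 1, plogis(beta * x + u), log = TRUE)), tau) should reproduce the brute-force value of f(y) closely already for small n_q.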
Difficult cases for numerical integration - dimension m > 1

- correlated random effects
- crossed random effects
- nested random effects

It is then not possible to factorize the likelihood into low-dimensional integrals. The number of quadrature points is k^m, where k is the number of quadrature points in 1D, hence numerical quadrature may not be feasible.

Alternatives: PQL and Laplace approximation, or Monte Carlo computation.

Monte Carlo computation of likelihood for GLMM

The likelihood function is an expectation:

L(\theta) = f(y; \theta) = \int_{\mathbb{R}^m} f(y \mid u; \beta) f(u; \tau^2) \, du = E_{\tau^2} f(y \mid U; \beta)

Use Monte Carlo simulations to approximate the expectation. NB: also applicable in high dimensions.

Importance sampling

Consider Z ~ f and suppose we wish to evaluate Eh(Z), where h(Z) has large variance. Suppose we can find a density g so that

h(z) f(z)/g(z) ≈ const  and  h(z) f(z) > 0 ⇒ g(z) > 0.

Then

Eh(Z) = \int \frac{h(z) f(z)}{g(z)} g(z) \, dz = E \frac{h(Y) f(Y)}{g(Y)}

where Y ~ g. Note the variance of h(Y)f(Y)/g(Y) is small, so the estimate

\frac{1}{n} \sum_{i=1}^{n} \frac{h(Y_i) f(Y_i)}{g(Y_i)}

has small Monte Carlo error.

Importance sampling for GLMM

Let g(·) be a probability density on ℝ^m. Then

L(\theta) = \int f(y \mid u; \beta) f(u; \tau^2) \, du = \int \frac{f(y \mid u; \beta) f(u; \tau^2)}{g(u)} g(u) \, du = E \frac{f(y \mid V; \beta) f(V; \tau^2)}{g(V)}

where V ~ g(·), and the importance sampling estimate is

L_{IS}(\theta) = \frac{1}{M} \sum_{l=1}^{M} \frac{f(y \mid V_l; \beta) f(V_l; \tau^2)}{g(V_l)}

where V_l ~ g(·), l = 1, ..., M.

Find g so that Var[f(y | V; β) f(V; τ²)/g(V)] is small. Var L_{IS}(θ) < ∞ if f(y | v; θ) f(v; θ)/g(v) is bounded (i.e. use a g(·) with heavy tails).
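A minimal numerical sketch of the importance sampling identity: estimate Eh(Z) for Z ~ f with f the standard normal density, h(z) = z², and a heavy-tailed t_3 proposal g; all concrete choices here are illustrative.

set.seed(1)
M <- 1e5
v <- rt(M, df = 3)                # V_l ~ g, heavy-tailed proposal
wts <- dnorm(v) / dt(v, df = 3)   # importance weights f(V_l)/g(V_l)
mean(v^2 * wts)                   # estimates E h(Z) = 1 for Z ~ N(0,1)

Since dnorm(v)/dt(v, df = 3) is bounded, the estimator has finite variance; swapping the roles (normal proposal, t target) would not be safe.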
Simple Monte Carlo: g(u) = f(u; τ²)

L(\theta) = \int f(y \mid u; \beta) f(u; \tau^2) \, du = E_{\tau^2} f(y \mid U; \beta)

L_{SMC}(\theta) = \frac{1}{M} \sum_{l=1}^{M} f(y \mid U_l; \beta), \quad U_l \sim N(0, \tau^2) \text{ independent}

Monte Carlo variance:

Var(L_{SMC}(\theta)) = \frac{1}{M} Var f(y \mid U_1; \beta)

Estimate Var f(y | U_1; β) using the empirical variance estimate based on f(y | U_l; β), l = 1, ..., M:

\frac{1}{M - 1} \sum_{l=1}^{M} \left( f(y \mid U_l; \beta) - L_{SMC}(\theta) \right)^2

Often Var f(y | U_1; β) is large, so a large M is needed. Increasing dimension leads to worse performance (useless in high dimensions).

Possibility: Laplace approximation of the conditional density

Note

f(y \mid u; \beta) f(u; \tau^2) = f(y; \theta) f(u \mid y; \theta) = L(\theta) f(u \mid y; \theta) = const \cdot f(u \mid y; \theta)

Laplace: U | Y = y ≈ N(µ_LP, σ²_LP). Use for g(·) the density of the N(µ_LP, σ²_LP)- or t_ν(µ_LP, σ²_LP)-distribution: then

f(y \mid u; \beta) f(u; \tau^2) / g(u) \approx const

so the variance is small. Simulation is straightforward. Note: this is a Monte Carlo version of adaptive Gaussian quadrature.

Possibility: sampling from the conditional density

Consider a fixed θ₀ and take

g(u) = f(u \mid y; \theta_0) = f(y \mid u; \theta_0) f(u; \theta_0) / L(\theta_0)

Then

L(\theta) = \int \frac{f(y \mid u; \beta) f(u; \tau^2)}{g(u)} g(u) \, du = L(\theta_0) \int \frac{f(y \mid u; \beta) f(u; \tau^2)}{f(y \mid u; \beta_0) f(u; \tau_0^2)} f(u \mid y; \theta_0) \, du = L(\theta_0) E_{\theta_0}\left[ \frac{f(y \mid U; \beta) f(U; \tau^2)}{f(y \mid U; \beta_0) f(U; \tau_0^2)} \,\Big|\, Y = y \right]

so

\frac{L(\theta)}{L(\theta_0)} = E_{\theta_0}\left[ \frac{f(y \mid U; \beta) f(U; \tau^2)}{f(y \mid U; \beta_0) f(U; \tau_0^2)} \,\Big|\, Y = y \right]

So we can estimate the ratio L(θ)/L(θ₀), where L(θ₀) is an unknown constant. This suffices for finding the MLE:

\arg\max_\theta L(\theta) = \arg\max_\theta \frac{L(\theta)}{L(\theta_0)} \approx \arg\max_\theta \frac{1}{M} \sum_{l=1}^{M} \frac{f(y \mid U_l; \beta) f(U_l; \tau^2)}{f(y \mid U_l; \beta_0) f(U_l; \tau_0^2)}

where U_l ~ f(u | y; θ₀).
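A sketch of the ratio estimator for the random-intercept logistic example introduced earlier, assuming draws u_post from f(u | y; θ₀) are available (for instance from the rejection sampler on the next slides); joint_log is a hypothetical helper computing log f(y | u; β) + log f(u; τ²).

# log of the unnormalized joint density f(y|u; beta) f(u; tau^2)
joint_log <- function(u, beta, tau)
  sapply(u, function(ui) {
    p <- plogis(beta * x + ui)
    sum(dbinom(y, 1, p, log = TRUE)) + dnorm(ui, 0, tau, log = TRUE)
  })

# Monte Carlo estimate of L(theta)/L(theta0) from u_post ~ f(u | y; theta0)
lik_ratio <- function(beta, tau, beta0, tau0, u_post)
  mean(exp(joint_log(u_post, beta, tau) - joint_log(u_post, beta0, tau0)))

Maximizing lik_ratio over (β, τ) with (β₀, τ₀) and u_post held fixed gives the MLE, since L(θ₀) is a constant not depending on θ.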
Simulation of random variables

Direct methods exist for many standard distributions (normal, binomial, t, etc.: rnorm(), rbinom(), rt(), etc.).

Suppose f is a non-standard density but f(z) ≤ K g(z) for some constant K and a standard density g. Then we may apply rejection sampling:

1. Generate Y ~ g and W ~ Unif[0,1].
2. If W ≤ f(Y)/(K g(Y)), return Y (accept); otherwise go to 1 (reject).

Note the probability of accepting is 1/K.

Proof of rejection sampling: compute

P(Y \le y \mid \text{accept}) = P\left( Y \le y \,\Big|\, W \le \frac{f(Y)}{K g(Y)} \right)

If f is a high-dimensional density it may be hard to find a g with small K, so rejection sampling is mainly useful in small dimensions. MCMC is then a useful alternative (later).

Prediction of U using conditional simulation

Compute a Monte Carlo estimate of E(U | Y = y) using conditional simulations of U | Y = y or importance sampling:

E(U \mid Y = y) \approx \frac{1}{M} \sum_{m=1}^{M} U_m, \quad U_m \sim f(u \mid y)

We can also evaluate e.g. P(U_i > c | y) or P(U_i > U_l, l ≠ i | Y) etc.

Conditional simulation of U | Y = y using rejection sampling

Note

f(y \mid u; \beta_0) f(u; \tau_0^2) / f(y; \theta_0) \le K \, t_\nu(u; \mu_{LP}, \sigma_{LP}^2)

for some constant K. Rejection sampling:

1. Generate V ~ t_ν(µ_LP, σ²_LP) and W ~ Unif(]0,1[).
2. Return V if W ≤ f(V | y; θ₀)/(K t_ν(V; µ_LP, σ²_LP)); otherwise go to 1.
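A sketch of this conditional simulation scheme: rejection sampling with a t_ν(µ_LP, σ²_LP) envelope. The target is supplied unnormalized as log f(y | u; β₀) + log f(u; τ₀²) (the unknown constant f(y; θ₀) is simply absorbed into K), and µ_LP, σ_LP, ν and the bound K are assumed given, e.g. K found by numerically maximizing the target/proposal ratio.

# Draw n values from the density proportional to exp(log_target(u)),
# assuming exp(log_target(u)) <= K * t_nu(u; mu_LP, sigma_LP^2) for all u
rcond <- function(n, log_target, mu_LP, sigma_LP, nu, K) {
  out <- numeric(n)
  i <- 0
  while (i < n) {
    v <- mu_LP + sigma_LP * rt(1, df = nu)               # V ~ t_nu proposal
    g <- dt((v - mu_LP) / sigma_LP, df = nu) / sigma_LP  # proposal density at v
    if (runif(1) <= exp(log_target(v)) / (K * g)) {      # accept w.p. f/(K g)
      i <- i + 1
      out[i] <- v
    }
  }
  out
}

Draws from rcond can be fed directly into the Monte Carlo predictor (1/M) Σ U_m above, or into the likelihood-ratio estimator from the previous slides.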
Maximization of likelihood using Newton-Raphson

Let

V_\theta(y, u) = \frac{d}{d\theta} \log f(y, u \mid \theta)

Then the score is

u(\theta) = \frac{d}{d\theta} \log L(\theta) = E_\theta[ V_\theta(y, U) \mid Y = y ]

and the observed information is

j(\theta) = -\frac{d^2}{d\theta^T d\theta} \log L(\theta) = -\left( E_\theta[ dV_\theta(y, U)/d\theta^T \mid Y = y ] + Var_\theta[ V_\theta(y, U) \mid Y = y ] \right)

Newton-Raphson:

\theta_{l+1} = \theta_l + j(\theta_l)^{-1} u(\theta_l)

All unknown expectations and variances can be estimated using the previous numerical integration or Monte Carlo methods!

EM-algorithm

Given the current estimate θ_l:

1. (E) compute Q(θ_l, θ) = E_{θ_l}[log f(y, U | θ) | Y = y]
2. (M) θ_{l+1} = argmax_θ Q(θ_l, θ).

For the LNMM the E-step can be computed explicitly (Jiang page 165), but this seems pointless as the likelihood is available in closed form. For GLMMs the (E) step needs numerical integration or Monte Carlo. Convergence of the EM-algorithm can be quite slow; maximization of the likelihood using Newton-Raphson seems a better alternative.

Overview of Jiang pages 165-171

- (Monte Carlo) EM for the linear mixed model and the threshold model
- Pages 168-169: Newton-Raphson and importance sampling
- Pages 170-171: Monte Carlo EM with adaptive importance sampling (t-distribution + Laplace approximation)

Case study: non-linear mixed effects model for cow mats data

Compression vs. pressure for two brands of mats. Non-linear relation:

y = \mathrm{mmf}(x) = \frac{ab + c x^d}{b + x^d}

Random variation between mats of the same brand, small measurement noise.

[Figure: compression vs. pressure for the two brands of mats.]
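A generic sketch of the resulting Newton-Raphson driver, where score() and info() are hypothetical functions returning Monte Carlo (or quadrature) estimates of u(θ) and j(θ).

# Newton-Raphson: theta_{l+1} = theta_l + j(theta_l)^{-1} u(theta_l)
newton_raphson <- function(theta, score, info, max_iter = 50, tol = 1e-6) {
  for (l in seq_len(max_iter)) {
    step <- solve(info(theta), score(theta))  # j(theta)^{-1} u(theta)
    theta <- theta + step
    if (sqrt(sum(step^2)) < tol) break        # stop when the update is tiny
  }
  theta
}

With Monte Carlo estimates of u(θ) and j(θ) the iteration is stochastic, so in practice one would reuse the same simulated U_l across iterations or increase M as the iteration approaches convergence.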
Simulated data from the two models:

[Figure: simulated compression vs. pressure curves for the fixed effects and the random effects model.]

Estimation of the non-linear model with fixed effects:

nlsfit = nls(nedtryk ~ mmf(tryk, a, b, c, d),
             start = c(a = 0.1, b = 1.670, c = 80, d = 0.6),
             data = mattressdata)

Fixed effects: residual standard error 0.72.

Estimation of the non-linear model with a, b, c as random effects:

nlmerfit = nlmer(nedtryk ~ mmfnlmer(tryk, a, b, c, d) ~
                   (a | matno) + (b | matno) + (c | matno),
                 mattressdata,
                 start = c(a = 0.04, b = 1.64, c = 74, d = 0.64))

With random effects: residual standard error 0.7. Std. err. for a, b, c are 0.64, 0.05 and 0.4.

The random effects model gives a much better representation of the variability in the data.

NB: to assess the influence of the variability of the different parameters we need to look at partial derivatives (sensitivities) with respect to these parameters; see the sketch following the exercises.

Exercises

1. Exercise 4.3 on page 229 in Jiang.
2. Exercises on the exercise sheets exercises6.pdf and exercises7.pdf.
3. Check the formulas for u(θ) and j(θ). How can these expressions be computed using importance sampling?
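To illustrate the sensitivity computation mentioned above, the partial derivatives of the MMF curve with respect to a, b and c can be obtained by symbolic differentiation with deriv(); the parameter values and the pressure grid below are illustrative, not the fitted ones.

# Gradient of mmf(x) = (a*b + c*x^d)/(b + x^d) with respect to a, b, c
mmf_grad <- deriv(~ (a * b + c * x^d) / (b + x^d),
                  namevec = c("a", "b", "c"),
                  function.arg = c("x", "a", "b", "c"))
sens <- attr(mmf_grad(x = seq(0.1, 5, by = 0.1),
                      a = 0.04, b = 1.64, c = 74, d = 0.64),
             "gradient")
head(sens)  # columns are the sensitivities dy/da, dy/db, dy/dc

Weighting each column by the corresponding random effect standard deviation indicates how much of the between-mat variability each parameter contributes along the pressure range.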