Chapter 4 - Fundamentals of spatial processes Lecture notes
Geir Storvik, January 21, 2013
Spatial processes
- Typically correlation between nearby sites
- Mostly positive correlation; negative correlation under competition
- Typically part of a space-time process (temporal snapshot or temporal aggregation)
Statistical analysis
- Incorporate spatial dependence into spatial statistical models
- Relatively new in statistics, an active research field!
Hierarchical (statistical) models
- Data model
- Process model
In a time series setting: state space models.
Example:
  Y_t = α Y_{t-1} + W_t,  t = 2, 3, ...    (process model)
  Z_t = β Y_t + η_t,  t = 1, 2, 3, ...     (data model)
  W_t ~ ind. (0, σ²_W),  η_t ~ ind. (0, σ²_η)
We will do a similar type of modelling now, separating the process model and the data model:
  Z(s_i) = Y(s_i) + ε(s_i),  ε(s_i) iid
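The state space example above is easy to simulate, which makes the separation of process model and data model concrete. A minimal sketch in Python (the notes use R; the parameter values here are illustrative, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (not from the notes)
alpha, beta = 0.8, 1.0
sigma_w, sigma_eta = 1.0, 0.5
T = 200

# Process model: Y_t = alpha * Y_{t-1} + W_t
y = np.zeros(T)
y[0] = rng.normal(0.0, sigma_w)
for t in range(1, T):
    y[t] = alpha * y[t - 1] + rng.normal(0.0, sigma_w)

# Data model: Z_t = beta * Y_t + eta_t (noisy observation of the process)
z = beta * y + rng.normal(0.0, sigma_eta, size=T)
```

Only z would be observed; inference about the latent y is exactly the kind of prediction problem treated next for spatial processes.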
Prediction
Typical aim: observed Z = (Z(s_1), ..., Z(s_m)); want to predict Y(s_0).
Squared error loss function: optimal predictor is E(Y(s_0) | Z).
Empirical methods: E(Y(s_0) | Z) ≈ E(Y(s_0) | Z, θ̂)
Bayesian methods: E(Y(s_0) | Z) = ∫_θ E(Y(s_0) | Z, θ) p(θ | Z) dθ
Require E(Y(s_0) | Z, θ). Given Y, the Z(s_i) are independent, but marginally they are dependent.
Require a model for {Y(s)}.
Geostatistical models
Assume {Y(s)} is a Gaussian process: (Y(s_1), ..., Y(s_m)) is multivariate Gaussian for all m and s_1, ..., s_m.
Need to specify
  µ(s) = E(Y(s))
  C_Y(s, s') = cov(Y(s), Y(s'))
Usually assume 2nd order stationarity:
  µ(s) = µ for all s
  C_Y(s, s') = C_Y(s - s') for all s, s'
Possible extension: µ(s) = x(s)^T β
Often: Z(s_i) | Y(s_i), σ²_ε ~ ind. Gau(Y(s_i), σ²_ε)
Prediction
(Z(s_1), ..., Z(s_m), Y(s_0)) is multivariate Gaussian. Can use the rules for conditional distributions: if
  (X_1, X_2)^T ~ MVN( (µ_1, µ_2)^T, [[Σ_11, Σ_12], [Σ_21, Σ_22]] )
then
  E(X_2 | X_1) = µ_2 + Σ_21 Σ_11^{-1} (X_1 - µ_1)
  var(X_2 | X_1) = Σ_22 - Σ_21 Σ_11^{-1} Σ_12
Expectations: as for ordinary linear regression. Covariances: new!
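The conditioning rule is mechanical once the mean and covariance blocks are in place. A small numpy sketch (function name and the tiny numerical example are mine, not from the notes):

```python
import numpy as np

def mvn_conditional(mu1, mu2, S11, S12, S22, x1):
    """Conditional distribution of X2 given X1 = x1, where
    (X1, X2) ~ MVN with mean (mu1, mu2) and covariance blocks
    S11, S12, S22 (S21 = S12^T).  Returns E(X2|X1=x1), var(X2|X1=x1)."""
    A = np.linalg.solve(S11, S12).T          # = S21 @ S11^{-1} (S11 symmetric)
    cond_mean = mu2 + A @ (x1 - mu1)
    cond_cov = S22 - A @ S12                 # S22 - S21 S11^{-1} S12
    return cond_mean, cond_cov
```

For a bivariate example with unit variances and correlation 0.5, observing X1 = 2 gives conditional mean 1 and conditional variance 0.75.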
Variogram and covariance function
Dependence can be specified through covariance functions. Such a function does not always exist, but prediction can still be carried out via a more general concept: the variogram. Note:
  2γ_Y(h) ≡ var[Y(s + h) - Y(s)]
          = var[Y(s + h)] + var[Y(s)] - 2 cov[Y(s + h), Y(s)]
          = 2C_Y(0) - 2C_Y(h)
The variogram can exist even if var[Y(s)] = ∞! This is often assumed in spatial statistics, hence the use of variograms.
When 2γ_Y(h) = 2γ_Y(‖h‖), the variogram is isotropic.
Isotropic covariance functions/variograms
Matérn: C_Y(h; θ) = σ₁² {2^{θ₂-1} Γ(θ₂)}^{-1} (‖h‖/θ₁)^{θ₂} K_{θ₂}(‖h‖/θ₁)
Powered-exponential: C_Y(h; θ) = σ₁² exp{-(‖h‖/θ₁)^{θ₂}}
Exponential: C_Y(h; θ) = σ₁² exp{-‖h‖/θ₁}
Gaussian: C_Y(h; θ) = σ₁² exp{-(‖h‖/θ₁)²}
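These families are straightforward to evaluate numerically. A sketch in Python/scipy (function names and defaults are mine; the notes' (σ₁², θ₁, θ₂) parametrization is used; K_θ₂ is the modified Bessel function of the second kind):

```python
import numpy as np
from scipy.special import kv, gamma

def matern(h, sigma2=1.0, theta1=1.0, theta2=0.5):
    """Matern: sigma2 * {2^(theta2-1) Gamma(theta2)}^-1
    * (|h|/theta1)^theta2 * K_theta2(|h|/theta1); C_Y(0) = sigma2."""
    h = np.abs(np.asarray(h, dtype=float))
    c = np.full(h.shape, sigma2)               # value at h = 0
    pos = h > 0
    x = h[pos] / theta1
    c[pos] = sigma2 * x**theta2 * kv(theta2, x) / (2**(theta2 - 1) * gamma(theta2))
    return c

def powered_exponential(h, sigma2=1.0, theta1=1.0, theta2=1.0):
    """theta2 = 1 gives the exponential, theta2 = 2 the Gaussian model."""
    return sigma2 * np.exp(-(np.abs(np.asarray(h, dtype=float)) / theta1)**theta2)
```

A standard sanity check: the Matérn model with smoothness θ₂ = 1/2 reduces to the exponential covariance.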
Bochner's theorem
A covariance function needs to be positive definite.
Theorem (Bochner, 1955): If ∫ |C_Y(h)| dh < ∞, then a valid real-valued covariance function can be written as
  C_Y(h) = ∫ cos(ω^T h) f_Y(ω) dω
where f_Y(ω) ≥ 0 is symmetric about ω = 0.
f_Y(ω): the spectral density of C_Y(h).
Nugget effect and sill
We have
  C_Z(h) = cov[Z(s), Z(s + h)]
         = cov[Y(s) + ε(s), Y(s + h) + ε(s + h)]
         = cov[Y(s), Y(s + h)] + cov[ε(s), ε(s + h)]
         = C_Y(h) + σ²_ε I(h = 0)
Assume
  C_Y(0) = σ²_Y,  lim_{h→0} [C_Y(0) - C_Y(h)] = 0
Then
  C_Z(0) = σ²_Y + σ²_ε                       (sill)
  lim_{h→0} [C_Z(0) - C_Z(h)] = σ²_ε = c_0   (nugget effect)
It is possible to include a nugget effect also in the Y-process.
[Figure: nugget/sill]
Estimation of the variogram/covariance function
  2γ_Z(h) ≡ var[Z(s + h) - Z(s)]  =  E[Z(s + h) - Z(s)]²  (constant expectation)
Can estimate this from all pairs of sites a distance h apart. Problem: few/no pairs for every h.
Simplifications: γ_Z(h) = γ⁰_Z(‖h‖). Use
  2γ̂⁰_Z(h) = ave{(Z(s_i) - Z(s_j))²; s_i - s_j ∈ T(h)}
where T(h) is a tolerance region around h. If there are covariates, use residuals.
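The averaging over tolerance regions T(h) amounts to binning the pairwise distances. A numpy sketch (function name and binning scheme are mine, not from the notes):

```python
import numpy as np

def empirical_variogram(coords, z, bins):
    """Estimate the semivariogram gamma(h) = 0.5 * ave{(Z(s_i) - Z(s_j))^2}
    over all pairs whose distance falls in each bin (the tolerance
    regions T(h) from the slide).  Returns one estimate per bin."""
    n = len(z)
    i, j = np.triu_indices(n, k=1)                      # all pairs once
    d = np.sqrt(((coords[i] - coords[j]) ** 2).sum(axis=1))
    sq = (z[i] - z[j]) ** 2
    gamma = np.full(len(bins) - 1, np.nan)
    for b in range(len(bins) - 1):
        in_bin = (d >= bins[b]) & (d < bins[b + 1])
        if in_bin.any():
            gamma[b] = 0.5 * sq[in_bin].mean()
    return gamma
```

For spatially independent data the estimate should be flat at σ²_Z across all lags, which gives a simple check.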
[Figure: empirical variogram, Boreality data]
Testing for independence
Under independence: γ⁰_Z(h) = σ²_Z for all h > 0.
Test statistic: F = γ̂_Z(h_1)/σ̂²_Z, with h_1 the smallest observed distance.
Reject H_0 when F - 1 is large.
Permutation test: recalculate F for permutations of Z. If the observed F is above the 97.5% percentile, reject H_0.
Boreality example: P-value =
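The permutation test keeps the locations fixed and shuffles the observed values, which destroys any spatial structure while preserving the marginal distribution. An illustrative Python sketch (the definition of "smallest distance" as the closest 5% of pairs and the two-sided p-value are my choices, not the notes'):

```python
import numpy as np

def variogram_permutation_test(coords, z, n_perm=999, seed=0):
    """Permutation test for spatial independence based on
    F = gamma_hat(h1) / sigma2_hat.  Locations fixed, Z permuted."""
    rng = np.random.default_rng(seed)
    n = len(z)
    i, j = np.triu_indices(n, k=1)
    d = np.sqrt(((coords[i] - coords[j]) ** 2).sum(axis=1))
    close = d <= np.quantile(d, 0.05)        # pairs at the smallest distances

    def F(values):
        gamma_h1 = 0.5 * ((values[i] - values[j]) ** 2)[close].mean()
        return gamma_h1 / values.var()

    f_obs = F(z)
    f_perm = np.array([F(rng.permutation(z)) for _ in range(n_perm)])
    # two-sided permutation p-value for departures of F from 1
    p = (1 + np.sum(np.abs(f_perm - 1) >= np.abs(f_obs - 1))) / (n_perm + 1)
    return f_obs, p
```

Smooth data (nearby values similar) gives F well below 1 and a small p-value; independent data gives F near 1.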
Kriging = prediction
Model:
  Y = Xβ + δ
  Z = Y + ε
Prediction of Y(s_0): linear predictors {L^T Z + k}.
The optimal predictor minimizes
  MSPE(L, k) ≡ E[Y(s_0) - L^T Z - k]²
            = var[Y(s_0) - L^T Z - k] + {E[Y(s_0) - L^T Z - k]}²
Note: no distributional assumptions are needed.
Kriging
  MSPE(L, k) = var[Y(s_0) - L^T Z - k] + {E[Y(s_0) - L^T Z - k]}²
             = var[Y(s_0) - L^T Z - k] + {µ_Y(s_0) - L^T µ_Z - k}²
The second term is zero if k = µ_Y(s_0) - L^T µ_Z.
First term (with c(s_0) = cov[Z, Y(s_0)]):
  var[Y(s_0) - L^T Z - k] = C_Y(s_0, s_0) - 2L^T c(s_0) + L^T C_Z L
Derivative w.r.t. L^T:
  -2c(s_0) + 2C_Z L = 0  giving  L* = C_Z^{-1} c(s_0)
  Ŷ(s_0) = µ_Y(s_0) + c(s_0)^T C_Z^{-1} [Z - µ_Z]
  MSPE(L*, k*) = C_Y(s_0, s_0) - c(s_0)^T C_Z^{-1} c(s_0)
Kriging
  c(s_0) = cov[Z, Y(s_0)] = cov[Y, Y(s_0)] = (C_Y(s_0, s_1), ..., C_Y(s_0, s_m))^T ≡ c_Y(s_0)
  C_Z = {C_Z(s_i, s_j)} with
    C_Z(s_i, s_j) = C_Y(s_i, s_i) + σ²_ε   if s_i = s_j
                  = C_Y(s_i, s_j)          if s_i ≠ s_j
  Ŷ(s_0) = µ_Y(s_0) + c_Y(s_0)^T C_Z^{-1} (Z - µ_Z)
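With a known constant mean, the simple kriging predictor above is a few lines of linear algebra. A numpy sketch (function name and interface are mine; cov_fn is any isotropic covariance function of distance, σ²_ε the nugget from the data model):

```python
import numpy as np

def simple_kriging(coords, z, s0, mu, cov_fn, sigma2_eps):
    """Simple kriging with known constant mean mu:
    Yhat(s0) = mu + c_Y(s0)^T C_Z^{-1} (Z - mu),
    MSPE    = C_Y(s0,s0) - c_Y(s0)^T C_Z^{-1} c_Y(s0)."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    C_Z = cov_fn(d) + sigma2_eps * np.eye(len(z))       # nugget on the diagonal
    c0 = cov_fn(np.sqrt(((coords - s0) ** 2).sum(-1)))  # c_Y(s0)
    w = np.linalg.solve(C_Z, c0)                        # C_Z^{-1} c_Y(s0)
    pred = mu + w @ (z - mu)
    mspe = cov_fn(np.array(0.0)) - c0 @ w
    return pred, mspe
```

With no nugget (σ²_ε = 0), kriging interpolates: predicting at an observed site returns the observed value with zero MSPE.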
Gaussian assumptions
Assume now that (Y, Z) are MVN:
  (Z, Y(s_0))^T ~ MVN( (µ_Z, µ_Y(s_0))^T, [[C_Z, c_Y(s_0)], [c_Y(s_0)^T, C_Y(s_0, s_0)]] )
This gives
  Y(s_0) | Z ~ N( µ_Y(s_0) + c(s_0)^T C_Z^{-1} [Z - µ_Z],  C_Y(s_0, s_0) - c(s_0)^T C_Z^{-1} c(s_0) )
Same as kriging! The book derives this directly without using the formula for conditional distributions.
Kriging (cont.)
Assuming
  E[Y(s)] = x(s)^T β
  Z(s_i) | Y(s_i), σ²_ε ~ ind. Gau(Y(s_i), σ²_ε)
then
  µ_Z = µ_Y = Xβ
  C_Z = Σ_Y + σ²_ε I
  c_Y(s_0) = (C_Y(s_0, s_1), ..., C_Y(s_0, s_m))^T
and
  Ŷ(s_0) = x(s_0)^T β + c_Y(s_0)^T [Σ_Y + σ²_ε I]^{-1} (Z - Xβ)
Summary previous time/plan
Simple kriging:
- Linear predictor
- Assumes parameters known
- Equal to the conditional expectation
Unknown parameters:
- Ordinary kriging
- Plug-in estimates
- Bayesian approach
Non-Gaussian models
Unknown parameters
So far we have assumed the parameters known; what if they are unknown?
- Plug-in estimate / empirical Bayes
- Bayesian approach
- Direct approach: ordinary kriging
Estimation
Likelihood: L(θ) = p(z; θ) = ∫_y p(z | y; θ) p(y; θ) dy
- Gaussian processes: can be derived analytically
- Optimization can still be problematic
- Many (too many?) routines available in R

> gls(Bor ~ Wet, correlation = corGaus(form = ~x + y, nugget = TRUE), data = Boreality)
Generalized least squares fit by REML
  Model: Bor ~ Wet
  Data: Boreality
  Log-restricted-likelihood:
Coefficients:
  (Intercept)  Wet
Correlation Structure: Gaussian spatial correlation
  Formula: ~x + y
  Parameter estimate(s):
    range  nugget
Degrees of freedom: 533 total; 531 residual
Residual standard error:
Bayesian approach
Hierarchical model:
  Data model:     Z ~ p(z | y, θ)   [Z | Y, θ]
  Process model:  Y ~ p(y | θ)      [Y | θ]
  Parameter:      θ
Simultaneous model: p(y, z | θ) = p(z | y, θ) p(y | θ)
Marginal model: L(θ) = p(z | θ) = ∫_y p(z, y | θ) dy
Inference: θ̂ = argmax_θ L(θ)
Bayesian approach: include a model for θ:
  Parameter model: θ ~ p(θ)   [θ]
Simultaneous model: p(y, z, θ)
Marginal model: p(z) = ∫_θ ∫_y p(z, y | θ) p(θ) dy dθ
Inference: θ̂ = ∫_θ θ p(θ | z) dθ
Bayesian approach - example
Assume z_1, ..., z_m are iid, z_i = µ + ε_i, ε_i ~ N(0, σ²), with σ² known; interest is in estimating µ.
ML: µ̂_ML = z̄, Var[z̄] = σ²/m.
Bayesian approach: assume µ ~ N(µ_0, kσ²). One can show that
  E[µ | z] = (k^{-1} µ_0 + m z̄) / (k^{-1} + m)
  Var[µ | z] = σ² / (k^{-1} + m)
Prediction - example
Predict a new point z_0.
Plug-in rule: ẑ_0 = z̄, var[ẑ_0] = σ².
Bayesian approach:
  E[z_0 | z] = E[E[z_0 | µ, z]] = E[µ | z] = (k^{-1} µ_0 + m z̄) / (k^{-1} + m)
  var[z_0 | z] = E[var[z_0 | µ, z]] + var[E[z_0 | µ, z]]
              = σ² + var[µ | z]
              = σ² + σ² / (k^{-1} + m)
which tends to σ² + σ²/m as k → ∞. This takes the uncertainty in µ into account.
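The posterior and predictive formulas above can be checked numerically: as the prior variance grows (k → ∞), the posterior mean approaches z̄ and the predictive variance approaches σ² + σ²/m. A small sketch (function names are mine):

```python
import numpy as np

# Conjugate analysis for a normal mean with sigma2 known,
# prior mu ~ N(mu0, k * sigma2), as on the slides.
def posterior_mu(z, sigma2, mu0, k):
    m, zbar = len(z), np.mean(z)
    post_mean = (mu0 / k + m * zbar) / (1 / k + m)
    post_var = sigma2 / (1 / k + m)
    return post_mean, post_var

def predictive_z0(z, sigma2, mu0, k):
    post_mean, post_var = posterior_mu(z, sigma2, mu0, k)
    # E[z0|z] = E[mu|z]; var[z0|z] = sigma2 + var[mu|z]
    return post_mean, sigma2 + post_var
```

With a vague prior (large k), the Bayesian answer recovers the plug-in point estimate but inflates the predictive variance by σ²/m.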
INLA
INLA = Integrated Nested Laplace Approximation (software)
- C code with an R interface
- Can be installed by source("
- Supports both empirical Bayes and fully Bayesian analysis
Example - simulated data
Assume
  Y(s) = β_0 + β_1 x(s) + σ_δ δ(s),   {δ(s)} Matérn correlation
  Z(s_i) = Y(s_i) + σ_ε ε(s_i),       {ε(s_i)} independent
Simulations:
- Simulation on a grid
- {x(s)} a sine curve in the horizontal direction
- Parameter values (β_0, β_1) = (2, 0.3), σ_δ = 1, σ_ε = 0.3 and (θ_1, θ_2) = (2, 3)
(Images and code/results shown in the lecture.)
Matérn model - parametrization
  Model   Correlation (up to σ²)              Range par.   Smoothness par.
  Book    (‖h‖/θ₁)^{θ₂} K_{θ₂}(‖h‖/θ₁)        θ₁           θ₂
  geoR    (‖h‖/φ)^κ K_κ(‖h‖/φ)                φ            κ
  INLA    (‖h‖κ)^ν K_ν(‖h‖κ)                  1/κ          ν
Note: INLA reports an estimate of another range, defined as r = √(8ν)/κ, corresponding to a distance where the covariance function is approximately zero. An estimate of the range parameter can then be obtained by κ = √(8ν)/r.
Note: INLA works with precisions τ_y = σ_y^{-2}, τ_z = σ_z^{-2}.
Output INLA - empirical Bayes
Fixed effects:
               mean  sd  0.025quant  0.5quant  0.975quant  kld
  (Intercept)
  x
Random effects:
  Name: delta   Model: Matern2D model   (Max KLD)
Model hyperparameters:
                                            mean  sd  0.025quant  0.5quant  0.975quant
  Precision for the Gaussian observations
  Precision for delta
  Range for delta
Marginal Likelihood:
Warning: Interpret the marginal likelihood with care if the prior model is improper.
Posterior marginals for linear predictor and fitted values computed
Result - INLA - empirical Bayes
  Parameter   True value   Estimate
  β_0         2
  β_1         0.3
  σ_z         0.3
  σ_y         1
  φ           2            2.113
Ordinary kriging
E[Y(s)] = µ, µ unknown; observations Z = (Z_1, ..., Z_m).
  MSPE(λ) = E[Y(s_0) - λ^T Z]²
Constraint (unbiasedness): E[λ^T Z] = E[Y(s_0)] = µ; since E[λ^T Z] = λ^T E[Z] = λ^T 1 µ, this requires λ^T 1 = 1.
Lagrange multiplier: minimize MSPE(λ) - 2κ(λ^T 1 - 1), where
  MSPE(λ) = C_Y(s_0, s_0) - 2λ^T c_Y(s_0) + λ^T C_Z λ
Differentiation w.r.t. λ^T:
  -2c_Y(s_0) + 2C_Z λ = 2κ1,   λ^T 1 = 1
giving
  λ* = C_Z^{-1} [c_Y(s_0) + κ* 1]
  κ* = (1 - c_Y(s_0)^T C_Z^{-1} 1) / (1^T C_Z^{-1} 1)
Kriging
Simple kriging, E[Y(s)] = x(s)^T β known:
  Ŷ(s_0) = x(s_0)^T β + c_Y(s_0)^T C_Z^{-1} (Z - Xβ)
Ordinary kriging, E[Y(s)] = µ unknown:
  Ŷ(s_0) = {c_Y(s_0) + 1 (1 - 1^T C_Z^{-1} c_Y(s_0)) / (1^T C_Z^{-1} 1)}^T C_Z^{-1} Z
         = µ̂_gls + c_Y(s_0)^T C_Z^{-1} (Z - 1 µ̂_gls),    µ̂_gls = [1^T C_Z^{-1} 1]^{-1} 1^T C_Z^{-1} Z
Universal kriging, E[Y(s)] = x(s)^T β unknown:
  Ŷ(s_0) = x(s_0)^T β̂_gls + c_Y(s_0)^T C_Z^{-1} (Z - X β̂_gls),    β̂_gls = [X^T C_Z^{-1} X]^{-1} X^T C_Z^{-1} Z
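Universal kriging replaces the known mean by its GLS estimate, and ordinary kriging is the special case where the design matrix is a column of ones. A numpy sketch (interface is mine, not from the notes):

```python
import numpy as np

def universal_kriging(X, coords, z, x0, s0, cov_fn, sigma2_eps):
    """Universal kriging:
    Yhat(s0) = x(s0)^T beta_gls + c_Y(s0)^T C_Z^{-1} (Z - X beta_gls),
    beta_gls = [X^T C_Z^{-1} X]^{-1} X^T C_Z^{-1} Z.
    Ordinary kriging: take X = column of ones and x0 = [1]."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    C_Z = cov_fn(d) + sigma2_eps * np.eye(len(z))
    Ci_X = np.linalg.solve(C_Z, X)                       # C_Z^{-1} X
    beta_gls = np.linalg.solve(X.T @ Ci_X, Ci_X.T @ z)   # GLS estimate
    c0 = cov_fn(np.sqrt(((coords - s0) ** 2).sum(-1)))   # c_Y(s0)
    return x0 @ beta_gls + c0 @ np.linalg.solve(C_Z, z - X @ beta_gls)
```

As with simple kriging, the predictor interpolates the data when there is no nugget: predicting at an observed site returns the observed value regardless of the estimated mean.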
Non-Gaussian data
Counts, binary data: a Gaussian assumption is inappropriate. We can still put a Gaussian assumption on the latent process, with a non-Gaussian data distribution:
  Y(s) = x(s)^T β + ε(s),   {ε(s)} a Gaussian process
  Z(s_i) | Y(s_i), θ_1 ~ ind. f(Y(s_i), θ_1)
The best linear predictor is still possible, but not necessarily reasonable. The conditional expectation E[Y(s_0) | Z] is still optimal under squared error loss, but is no longer easy to compute.
Exponential-family model (EFM):
  f(z) = exp{(zη - b(η))/a(θ_1) + c(z, θ_1)},   η = x^T β
Includes the Binomial, Poisson, Gaussian and Gamma distributions.
Estimation
Likelihood: L(θ) = p(z; θ) = ∫_y p(z | y; θ) p(y; θ) dy
Gaussian processes: can be derived analytically.
Non-Gaussian: can be problematic. Options:
- Monte Carlo
- Laplace approximations
Computation of the conditional expectation
  E[Y(s_0) | Z] = E[E[Y(s_0) | Y, Z]]
Monte Carlo:
  E[E[Y(s_0) | Y, Z]] ≈ (1/M) Σ_{m=1}^M E[Y(s_0) | Y^m, Z],   Y^m ~ [Y | Z]  (MCMC)
Laplace approximation: approximate [Y | Z] by a Gaussian.
Computer software is available for both approaches, which also gives uncertainty measures var[Y(s_0) | Z].
Laplace approximation
  L(θ) = p(z; θ) = ∫_y p(z | y; θ) p(y; θ) dy = ∫_y exp{f(y; θ)} dy
where f(y; θ) = log p(z | y; θ) + log p(y; θ).
Taylor approximation around y_0, the maximum point of f(y; θ) (depending on θ):
  f(y) ≈ f(y_0) - ½ (y - y_0)^T H (y - y_0),   H = -∂²f(y)/∂y∂y^T |_{y=y_0}
  L(θ) ≈ ∫_y exp{f(y_0) - ½ (y - y_0)^T H (y - y_0)} dy = exp{f(y_0)} (2π)^{m/2} |H|^{-1/2}
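In one dimension (m = 1) the formula reduces to exp{f(y_0)} √(2π/H). A quick check on a known integral (my example, not from the notes): ∫₀^∞ yⁿ e^{-y} dy = n!, where f(y) = n log y - y has its maximum at y_0 = n with H = -f''(y_0) = n/y_0² = 1/n; the Laplace approximation then recovers Stirling's formula.

```python
import math

def laplace_1d(f, neg_hess, y0):
    """One-dimensional Laplace approximation:
    int exp{f(y)} dy ≈ exp{f(y0)} * sqrt(2*pi / H),
    with y0 the maximizer of f and H = -f''(y0) > 0."""
    H = neg_hess(y0)
    return math.exp(f(y0)) * math.sqrt(2 * math.pi / H)

# f(y) = n*log(y) - y, maximized at y0 = n, with H = n / y0^2
n = 10
approx = laplace_1d(lambda y: n * math.log(y) - y,
                    lambda y: n / y**2, float(n))
exact = math.factorial(n)   # the integral equals n!
```

For n = 10 the approximation is within about 1% of 10!, illustrating why the Laplace approximation is accurate when the integrand is close to a Gaussian around its mode.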
Example - simulated data
Assume
  Y(s) = β_0 + β_1 x(s) + σ_δ δ(s),   {δ(s)} Matérn correlation
  Z(s_i) ~ Poisson(exp(Y(s_i)))
Simulations:
- Simulation on a grid
- {x(s)} a sine curve in the horizontal direction
- Parameter values (β_0, β_1) = (2, 0.3), σ_δ = 1 and (θ_1, θ_2) = (2, 3)
(Images and code/results shown in the lecture.)
More informationHierarchical Modelling for Univariate and Multivariate Spatial Data
Hierarchical Modelling for Univariate and Multivariate Spatial Data p. 1/4 Hierarchical Modelling for Univariate and Multivariate Spatial Data Sudipto Banerjee sudiptob@biostat.umn.edu University of Minnesota
More informationSpatial Statistics with Image Analysis. Outline. A Statistical Approach. Johan Lindström 1. Lund October 6, 2016
Spatial Statistics Spatial Examples More Spatial Statistics with Image Analysis Johan Lindström 1 1 Mathematical Statistics Centre for Mathematical Sciences Lund University Lund October 6, 2016 Johan Lindström
More informationHierarchical Modelling for Univariate Spatial Data
Spatial omain Hierarchical Modelling for Univariate Spatial ata Sudipto Banerjee 1 and Andrew O. Finley 2 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A.
More informationModelling geoadditive survival data
Modelling geoadditive survival data Thomas Kneib & Ludwig Fahrmeir Department of Statistics, Ludwig-Maximilians-University Munich 1. Leukemia survival data 2. Structured hazard regression 3. Mixed model
More informationEcon 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines
Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines Maximilian Kasy Department of Economics, Harvard University 1 / 37 Agenda 6 equivalent representations of the
More informationIntroduction to Probabilistic Machine Learning
Introduction to Probabilistic Machine Learning Piyush Rai Dept. of CSE, IIT Kanpur (Mini-course 1) Nov 03, 2015 Piyush Rai (IIT Kanpur) Introduction to Probabilistic Machine Learning 1 Machine Learning
More informationIndex. Geostatistics for Environmental Scientists, 2nd Edition R. Webster and M. A. Oliver 2007 John Wiley & Sons, Ltd. ISBN:
Index Akaike information criterion (AIC) 105, 290 analysis of variance 35, 44, 127 132 angular transformation 22 anisotropy 59, 99 affine or geometric 59, 100 101 anisotropy ratio 101 exploring and displaying
More informationAn Introduction to Spatial Statistics. Chunfeng Huang Department of Statistics, Indiana University
An Introduction to Spatial Statistics Chunfeng Huang Department of Statistics, Indiana University Microwave Sounding Unit (MSU) Anomalies (Monthly): 1979-2006. Iron Ore (Cressie, 1986) Raw percent data
More informationMCMC for big data. Geir Storvik. BigInsight lunch - May Geir Storvik MCMC for big data BigInsight lunch - May / 17
MCMC for big data Geir Storvik BigInsight lunch - May 2 2018 Geir Storvik MCMC for big data BigInsight lunch - May 2 2018 1 / 17 Outline Why ordinary MCMC is not scalable Different approaches for making
More informationThe Poisson transform for unnormalised statistical models. Nicolas Chopin (ENSAE) joint work with Simon Barthelmé (CNRS, Gipsa-LAB)
The Poisson transform for unnormalised statistical models Nicolas Chopin (ENSAE) joint work with Simon Barthelmé (CNRS, Gipsa-LAB) Part I Unnormalised statistical models Unnormalised statistical models
More informationBAYESIAN KRIGING AND BAYESIAN NETWORK DESIGN
BAYESIAN KRIGING AND BAYESIAN NETWORK DESIGN Richard L. Smith Department of Statistics and Operations Research University of North Carolina Chapel Hill, N.C., U.S.A. J. Stuart Hunter Lecture TIES 2004
More informationYaming Yu Department of Statistics, University of California, Irvine Xiao-Li Meng Department of Statistics, Harvard University.
Appendices to To Center or Not to Center: That is Not the Question An Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Efficiency Yaming Yu Department of Statistics, University of
More informationGaussian Processes 1. Schedule
1 Schedule 17 Jan: Gaussian processes (Jo Eidsvik) 24 Jan: Hands-on project on Gaussian processes (Team effort, work in groups) 31 Jan: Latent Gaussian models and INLA (Jo Eidsvik) 7 Feb: Hands-on project
More informationUsing Bayes Factors for Model Selection in High-Energy Astrophysics
Using Bayes Factors for Model Selection in High-Energy Astrophysics Shandong Zhao Department of Statistic, UCI April, 203 Model Comparison in Astrophysics Nested models (line detection in spectral analysis):
More information1 Mixed effect models and longitudinal data analysis
1 Mixed effect models and longitudinal data analysis Mixed effects models provide a flexible approach to any situation where data have a grouping structure which introduces some kind of correlation between
More informationAnalysing geoadditive regression data: a mixed model approach
Analysing geoadditive regression data: a mixed model approach Institut für Statistik, Ludwig-Maximilians-Universität München Joint work with Ludwig Fahrmeir & Stefan Lang 25.11.2005 Spatio-temporal regression
More informationBig Data in Environmental Research
Big Data in Environmental Research Gavin Shaddick University of Bath 7 th July 2016 1/ 133 OUTLINE 10:00-11:00 Introduction to Spatio-Temporal Modelling 14:30-15:30 Bayesian Inference 17:00-18:00 Examples
More informationLecture 2: From Linear Regression to Kalman Filter and Beyond
Lecture 2: From Linear Regression to Kalman Filter and Beyond January 18, 2017 Contents 1 Batch and Recursive Estimation 2 Towards Bayesian Filtering 3 Kalman Filter and Bayesian Filtering and Smoothing
More informationEfficient Likelihood-Free Inference
Efficient Likelihood-Free Inference Michael Gutmann http://homepages.inf.ed.ac.uk/mgutmann Institute for Adaptive and Neural Computation School of Informatics, University of Edinburgh 8th November 2017
More informationPrediction. Prediction MIT Dr. Kempthorne. Spring MIT Prediction
MIT 18.655 Dr. Kempthorne Spring 2016 1 Problems Targets of Change in value of portfolio over fixed holding period. Long-term interest rate in 3 months Survival time of patients being treated for cancer
More informationML estimation: Random-intercepts logistic model. and z
ML estimation: Random-intercepts logistic model log p ij 1 p = x ijβ + υ i with υ i N(0, συ) 2 ij Standardizing the random effect, θ i = υ i /σ υ, yields log p ij 1 p = x ij β + σ υθ i with θ i N(0, 1)
More informationLecture 9 STK3100/4100
Lecture 9 STK3100/4100 27. October 2014 Plan for lecture: 1. Linear mixed models cont. Models accounting for time dependencies (Ch. 6.1) 2. Generalized linear mixed models (GLMM, Ch. 13.1-13.3) Examples
More informationGeostatistical Modeling for Large Data Sets: Low-rank methods
Geostatistical Modeling for Large Data Sets: Low-rank methods Whitney Huang, Kelly-Ann Dixon Hamil, and Zizhuang Wu Department of Statistics Purdue University February 22, 2016 Outline Motivation Low-rank
More informationSimple Linear Regression
Simple Linear Regression In simple linear regression we are concerned about the relationship between two variables, X and Y. There are two components to such a relationship. 1. The strength of the relationship.
More informationFitting Multidimensional Latent Variable Models using an Efficient Laplace Approximation
Fitting Multidimensional Latent Variable Models using an Efficient Laplace Approximation Dimitris Rizopoulos Department of Biostatistics, Erasmus University Medical Center, the Netherlands d.rizopoulos@erasmusmc.nl
More informationLikelihood-Based Methods
Likelihood-Based Methods Handbook of Spatial Statistics, Chapter 4 Susheela Singh September 22, 2016 OVERVIEW INTRODUCTION MAXIMUM LIKELIHOOD ESTIMATION (ML) RESTRICTED MAXIMUM LIKELIHOOD ESTIMATION (REML)
More information