Bayesian inference for multivariate extreme value distributions
Sebastian Engelke, Clément Dombry, Marco Oesting
Toronto, Fields Institute, May 4th, 2016
Main motivation
For a parametric model $Z \sim F_\theta$ of multivariate max-stable distributions, the full LLHs are usually not available, so maximum likelihood estimation is infeasible.
Goal: Use full LLHs in a Bayesian setup to improve frequentist efficiency and to allow for Bayesian methods in multivariate extremes.
Max-stable distributions and partitions
Let $Z$ be a $d$-dimensional max-stable distribution and $X_1, \dots, X_n$ (with std. Fréchet margins) in its MDA. The componentwise maxima satisfy $M_n = \max_{i=1}^{n} X_i / n \to_d Z$.
The partition $\Pi_n$ of the set $\{1, \dots, d\}$ records the occurrence times of the maxima, i.e., $j$ and $k$ lie in the same set if $M_{n,j}$ and $M_{n,k}$ come from the same $X_i$. Example: $d = 4$, $n = 3$, $\Pi_n = \{\{1\}, \{2, 3\}, \{4\}\}$.
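The occurrence-time partition is easy to extract from data. A minimal sketch (function name and the toy data are illustrative; component indices are 0-based, so the slide's example partition $\{\{1\},\{2,3\},\{4\}\}$ appears as `[[0], [1, 2], [3]]`):

```python
def occurrence_partition(x):
    """Given a block of observations x (list of n vectors of length d),
    group components whose componentwise maximum is attained by the
    same observation, i.e. compute the partition Pi_n."""
    n, d = len(x), len(x[0])
    # index of the observation attaining the maximum in each component
    argmax = [max(range(n), key=lambda i: x[i][j]) for j in range(d)]
    blocks = {}
    for j, i in enumerate(argmax):
        blocks.setdefault(i, []).append(j)
    return sorted(blocks.values())

# toy example matching the slide: d = 4, n = 3
x = [[5.0, 1.0, 1.0, 0.5],
     [0.2, 4.0, 3.0, 0.1],
     [0.3, 0.2, 0.1, 2.0]]
print(occurrence_partition(x))  # [[0], [1, 2], [3]]
```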
Max-stable distributions and partitions
We have the weak limit, as $n \to \infty$, $(M_n, \Pi_n) \to_d (Z, \Pi)$, where $\Pi$ is the limit partition of the occurrence times of $Z$. Both $\Pi_n$ and $\Pi$ take values in the set $\mathcal{P}_d$ of all partitions of $\{1, \dots, d\}$.
Max-stable distributions and likelihoods
$P(Z \le z) = e^{-V(z)}$, where $V$ is the exponent measure of $Z$.
The likelihood of $Z$ is $L_Z(z) = \sum_{\pi \in \mathcal{P}_d} L_{Z,\Pi}(z, \pi)$, where the Stephenson–Tawn LLH (2005, Biometrika), the joint LLH of $Z$ and $\Pi$, has the explicit form
$$L_{Z,\Pi}(z, \pi) = e^{-V(z)} \prod_{j=1}^{|\pi|} \bigl\{ -V_{\pi^{(j)}}(z) \bigr\},$$
where the $V_{\pi^{(j)}}$ are partial derivatives of $V$ in the directions $\pi^{(j)}$, and the partition $\pi = (\pi^{(1)}, \dots, \pi^{(|\pi|)})$ consists of $|\pi|$ sets.
Since $|\mathcal{P}_d|$ is the $d$th Bell number (very large), the use of $L_Z(z)$ is infeasible.
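To see how quickly the sum over $\mathcal{P}_d$ becomes intractable, the Bell numbers can be computed with the Bell triangle recursion (a minimal sketch; already $|\mathcal{P}_{10}| = 115{,}975$, and growth is super-exponential in $d$):

```python
def bell_numbers(n):
    """Return the Bell numbers B_0, ..., B_n via the Bell triangle:
    each row starts with the last entry of the previous row, and each
    further entry is its left neighbour plus the entry above it."""
    row = [1]
    bells = [1]  # B_0
    for _ in range(n):
        new_row = [row[-1]]
        for value in row:
            new_row.append(new_row[-1] + value)
        row = new_row
        bells.append(row[0])
    return bells

print(bell_numbers(10)[-1])  # B_10 = 115975
```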
Parametric inference methods for $Z \sim F_\theta$
With information on the partition (occurrence times observed):
- Stephenson and Tawn (2005, Biometrika) use the partition information and the joint LLH $L_{Z,\Pi}(z, \pi)$. But: possible bias if $\Pi_n \ne \Pi$.
- Wadsworth (2015, Biometrika) proposes a bias correction by using $L_{Z,\Pi_n}(z, \pi)$.
Without information on the partition (occurrence times unobserved):
- Composite (pairwise) likelihood, Padoan et al. (2010, JASA). But: losses in efficiency.
- Dombry et al. (2013, Biometrika) introduce a Gibbs sampler to obtain conditional partitions from $L_{\Pi \mid Z}(\pi \mid z)$.
- Thibaud et al. (2015) use this Gibbs sampler to choose $\Pi$ automatically and obtain the posterior $L(\theta \mid z)$ in a Bayesian framework for Brown–Resnick processes.
Goals
Recall: MLE with $L_Z(z)$ is infeasible.
- Establish a Bayesian framework which uses full LLHs for general max-stable distributions (without observed partition information).
- Improve the (frequentist) efficiency of estimates compared to existing methods (pairwise LLH, ...), both without partition information and, when partition information is available, by ignoring it.
- Explore applications of Bayesian methodology in multivariate extremes.
Bayesian framework
Parametric model $Z \sim F_\theta$ with $\theta \in \Theta$.
- Observations from $Z$: $z_1, \dots, z_n \in \mathbb{R}^d$
- Unobserved partitions (latent variables): $\pi_1, \dots, \pi_n \in \mathcal{P}_d$
- Model parameters: $\theta \in \Theta$
Bayesian setup: Introduce a prior $\gamma$ on $\Theta$ and try to obtain the posterior distribution $L(\theta, \pi_1, \dots, \pi_n \mid z_1, \dots, z_n)$.
Markov chain Monte Carlo
The posterior has to be evaluated by MCMC: a Gibbs sampler for the partitions and Metropolis–Hastings for $\theta$, using
$$L(\theta, \pi_1, \dots, \pi_n \mid z_1, \dots, z_n) \propto \gamma(\theta) \prod_{i=1}^{n} L(z_i, \pi_i) = \gamma(\theta) \prod_{i=1}^{n} e^{-V(z_i)} \prod_{j=1}^{|\pi_i|} \bigl\{ -V_{\pi_i^{(j)}}(z_i) \bigr\}.$$
We need the derivatives of the exponent measure, $V_{\pi^{(j)}}(z)$.
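The $\theta$-update can be sketched as a random-walk Metropolis–Hastings step on $(0, 1)$. The sketch below is illustrative only: `log_joint` is a hypothetical stand-in for $\log \gamma(\theta) + \sum_i \log L(z_i, \pi_i)$ (here a toy density peaked at $\theta = 0.5$), and in the real sampler the Gibbs update of the partitions $\pi_i$ would alternate with this step.

```python
import math
import random

def log_joint(theta, z, partitions):
    """Hypothetical placeholder for the log of the unnormalized posterior,
    log gamma(theta) + sum_i log L(z_i, pi_i). For a concrete model, plug
    in -V(z_i) and the log(-V_{pi^{(j)}}(z_i)) terms. Here: a toy
    log-density peaked at theta = 0.5."""
    return -((theta - 0.5) ** 2) / 0.02

def metropolis_theta(theta, z, partitions, step=0.05):
    """One random-walk Metropolis-Hastings update for theta on (0, 1)."""
    proposal = theta + random.uniform(-step, step)
    if not 0.0 < proposal < 1.0:
        return theta  # target has no mass outside (0, 1): reject
    log_alpha = (log_joint(proposal, z, partitions)
                 - log_joint(theta, z, partitions))
    if math.log(random.random()) < log_alpha:
        return proposal  # accept
    return theta  # reject

random.seed(1)
theta = 0.9
chain = []
for _ in range(5000):
    theta = metropolis_theta(theta, z=None, partitions=None)
    chain.append(theta)
# after burn-in the chain concentrates near the mode of the toy target
```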
Examples
Logistic model with $\theta \in \Theta = (0, 1)$:
$$-V_{\pi^{(j)}}(z) = \theta^{1 - |\pi^{(j)}|} \, \frac{\Gamma(|\pi^{(j)}| - \theta)}{\Gamma(1 - \theta)} \Bigl( \sum_{k=1}^{d} z_k^{-1/\theta} \Bigr)^{\theta - |\pi^{(j)}|} \prod_{k \in \pi^{(j)}} z_k^{-1 - 1/\theta}.$$
Many other models: Brown–Resnick (cf. Thibaud et al., 2015, Biometrika), extremal-$t$, Reich–Shaby, Dirichlet, ...
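The logistic formula above can be transcribed directly, working on the log scale with `math.lgamma` for numerical stability (function names are illustrative; index sets are 0-based):

```python
import math

def log_neg_V_partial(z, block, theta):
    """log of -V_{pi^{(j)}}(z) for the logistic model with exponent
    measure V(z) = (sum_k z_k^{-1/theta})^theta; z is the full vector,
    block the index set pi^{(j)}, theta in (0, 1)."""
    b = len(block)
    s = sum(zk ** (-1.0 / theta) for zk in z)
    return ((1 - b) * math.log(theta)
            + math.lgamma(b - theta) - math.lgamma(1 - theta)
            + (theta - b) * math.log(s)
            + sum((-1.0 - 1.0 / theta) * math.log(z[k]) for k in block))

def log_stephenson_tawn(z, partition, theta):
    """log L_{Z,Pi}(z, pi) = -V(z) + sum_j log(-V_{pi^{(j)}}(z))."""
    s = sum(zk ** (-1.0 / theta) for zk in z)
    return (-s ** theta
            + sum(log_neg_V_partial(z, blk, theta) for blk in partition))

z = [1.2, 0.8, 2.0, 1.5]
print(log_stephenson_tawn(z, [[0], [1, 2], [3]], theta=0.7))
```

A quick sanity check is to compare $-V_{\{k\}}(z)$ against a numerical derivative of $V(z) = (\sum_k z_k^{-1/\theta})^\theta$.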
Extremal dependence: max-stable data
Simulate $n = 100$ samples $z_1, \dots, z_n$ from the $d$-dimensional max-stable logistic model for $Z$ with parameter $\theta_0 \in \{0.1, 0.7, 0.9\}$; see Dombry et al. (2016, Biometrika).
Run the MCMC with a uniform prior $\gamma$ on $(0, 1)$, and take the empirical median of the posterior $L(\theta \mid z_1, \dots, z_n)$ as the point estimate $\hat\theta_{\mathrm{Bayes}}$.
Figure: Markov chains whose stationary distributions are the posterior of $\theta$ (left) and the mean partition size (right); shown for $\theta_0 = 0.1$ and $\theta_0 = 0.9$, $d = 10$.
Extremal dependence: max-stable data
Compare with the pairwise composite LLH estimator $\hat\theta_{\mathrm{PL}}$.
Table: Sample bias and standard deviation of $\hat\theta_{\mathrm{Bayes}}$ and $\hat\theta_{\mathrm{PL}}$ for $\theta_0 \in \{0.1, 0.7, 0.9\}$ and varying dimension $d$, estimated from 1500 estimates.
Observations: The posterior median $\hat\theta_{\mathrm{Bayes}}$ is unbiased, and the standard deviations are substantially reduced by using full LLHs.
Extremal dependence: max-stable data
Table: Relative efficiencies (%) of $\hat\theta_{\mathrm{Bayes}}$ compared to $\hat\theta_{\mathrm{PL}}$, i.e. $\mathrm{MSE}(\hat\theta_{\mathrm{Bayes}})/\mathrm{MSE}(\hat\theta_{\mathrm{PL}})$, for $\theta_0 \in \{0.1, 0.7, 0.9\}$ and varying dimension $d$, estimated from 1500 estimates.
Observations: The efficiency gain increases for weaker dependence and higher dimensions; see also Huser et al. (2015, Extremes).
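The relative-efficiency measure used in these comparisons is simply a ratio of mean squared errors. A minimal sketch (function names and the toy estimates are illustrative):

```python
def mse(estimates, true_value):
    """Mean squared error of a list of estimates of a known true value."""
    return sum((e - true_value) ** 2 for e in estimates) / len(estimates)

def relative_efficiency(est_a, est_b, true_value):
    """Relative efficiency of estimator A vs. B in percent,
    100 * MSE(A) / MSE(B); values below 100 favour A."""
    return 100.0 * mse(est_a, true_value) / mse(est_b, true_value)

# toy example: estimator A has smaller spread around theta0 = 0.7
a = [0.69, 0.71, 0.70, 0.72, 0.68]
b = [0.65, 0.75, 0.70, 0.78, 0.62]
print(relative_efficiency(a, b, true_value=0.7))
```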
Extremal dependence: max-domain of attraction
For $i = 1, \dots, n$, $n = 100$, simulate $b = 50$ samples $x_{1,i}, \dots, x_{b,i}$ from the outer power Clayton copula in the MDA of the logistic model with parameter $\theta_0$. Take the block maxima $z_i = \max_{j=1,\dots,b} x_{j,i}$ as input data.
Compare $\hat\theta_{\mathrm{Bayes}}$ and $\hat\theta_{\mathrm{PL}}$ based on $\{z_i\}_i$, and the Stephenson–Tawn estimator $\hat\theta_{\mathrm{ST}}$ and its bias-corrected version $\hat\theta_{\mathrm{W}}$ based on $\{(z_i, \pi_i)\}_i$.
Table: Relative efficiencies $\mathrm{MSE}(\hat\theta_{\mathrm{Bayes}})/\mathrm{MSE}(\hat\theta)$ (in %) for the different estimators $\hat\theta$, for $\theta_0 \in \{0.1, 0.7, 0.9\}$ and varying dimension $d$, estimated from 1500 estimates.
Observations: The efficiency gain from using full information remains large. The automatic choice of the partition in $\hat\theta_{\mathrm{Bayes}}$ is more robust than the ST likelihood with a fixed partition; no bias correction is needed.
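The componentwise block-maxima step above can be sketched as follows (function name and toy data are illustrative):

```python
def block_maxima(x, b):
    """Split the observations x (list of vectors of equal length) into
    consecutive blocks of size b and return the componentwise maximum
    of each block."""
    d = len(x[0])
    return [
        [max(row[k] for row in x[i:i + b]) for k in range(d)]
        for i in range(0, len(x), b)
    ]

# toy example: 4 bivariate observations, block size b = 2
x = [[1.0, 3.0], [2.0, 1.0], [0.5, 0.2], [0.1, 0.9]]
print(block_maxima(x, b=2))  # [[2.0, 3.0], [0.5, 0.9]]
```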
More results and work in progress
Estimation of marginal parameters: Simultaneous estimation of the marginal and dependence parameters is possible. An astonishing observation: this substantially reduces the MSE for shape estimation (with only a weak improvement for location and scale).
Bayesian hypothesis testing: e.g., testing asymptotic independence against dependence, $H_0: \theta_0 = 1$ against $H_1: \theta_0 \in (0, 1)$, by comparing the posterior probabilities $P(H_0 \mid z_1, \dots, z_n)$ and $P(H_1 \mid z_1, \dots, z_n)$. Many other tests are possible (symmetry, regression coefficients, ...).
Asymptotic limit results for the posterior median.
References
C. Dombry, S. Engelke, and M. Oesting. Exact simulation of max-stable processes. Biometrika, to appear.
C. Dombry, F. Eyi-Minko, and M. Ribatet. Conditional simulation of max-stable processes. Biometrika, 100(1), 2013.
R. Huser, A. C. Davison, and M. G. Genton. Likelihood estimators for multivariate extremes. Extremes, 19:79-103, 2015.
S. A. Padoan, M. Ribatet, and S. A. Sisson. Likelihood-based inference for max-stable processes. J. Amer. Statist. Assoc., 105, 2010.
A. Stephenson and J. A. Tawn. Exploiting occurrence times in likelihood inference for componentwise maxima. Biometrika, 92(1), 2005.
E. Thibaud, J. Aalto, D. S. Cooley, A. C. Davison, and J. Heikkinen. Bayesian inference for the Brown-Resnick process, with an application to extreme low temperatures. Preprint, 2015.
J. L. Wadsworth. On the occurrence times of componentwise maxima and bias in likelihood inference for multivariate max-stable distributions. Biometrika, 102, 2015.
Bayesian Nonparametric Regression for Diabetes Deaths Brian M. Hartman PhD Student, 2010 Texas A&M University College Station, TX, USA David B. Dahl Assistant Professor Texas A&M University College Station,
More informationPattern Recognition and Machine Learning
Christopher M. Bishop Pattern Recognition and Machine Learning ÖSpri inger Contents Preface Mathematical notation Contents vii xi xiii 1 Introduction 1 1.1 Example: Polynomial Curve Fitting 4 1.2 Probability
More informationBayesian Estimation of Input Output Tables for Russia
Bayesian Estimation of Input Output Tables for Russia Oleg Lugovoy (EDF, RANE) Andrey Polbin (RANE) Vladimir Potashnikov (RANE) WIOD Conference April 24, 2012 Groningen Outline Motivation Objectives Bayesian
More informationLearning Sequence Motif Models Using Expectation Maximization (EM) and Gibbs Sampling
Learning Sequence Motif Models Using Expectation Maximization (EM) and Gibbs Sampling BMI/CS 776 www.biostat.wisc.edu/bmi776/ Spring 009 Mark Craven craven@biostat.wisc.edu Sequence Motifs what is a sequence
More informationSpatial extreme value theory and properties of max-stable processes Poitiers, November 8-10, 2012
Spatial extreme value theory and properties of max-stable processes Poitiers, November 8-10, 2012 November 8, 2012 15:00 Clement Dombry Habilitation thesis defense (in french) 17:00 Snack buet November
More informationThe Bayesian Choice. Christian P. Robert. From Decision-Theoretic Foundations to Computational Implementation. Second Edition.
Christian P. Robert The Bayesian Choice From Decision-Theoretic Foundations to Computational Implementation Second Edition With 23 Illustrations ^Springer" Contents Preface to the Second Edition Preface
More informationReview. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda
Review DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Probability and statistics Probability: Framework for dealing with
More informationComputer Vision Group Prof. Daniel Cremers. 10a. Markov Chain Monte Carlo
Group Prof. Daniel Cremers 10a. Markov Chain Monte Carlo Markov Chain Monte Carlo In high-dimensional spaces, rejection sampling and importance sampling are very inefficient An alternative is Markov Chain
More informationModelling Multivariate Peaks-over-Thresholds using Generalized Pareto Distributions
Modelling Multivariate Peaks-over-Thresholds using Generalized Pareto Distributions Anna Kiriliouk 1 Holger Rootzén 2 Johan Segers 1 Jennifer L. Wadsworth 3 1 Université catholique de Louvain (BE) 2 Chalmers
More informationA Bayesian perspective on GMM and IV
A Bayesian perspective on GMM and IV Christopher A. Sims Princeton University sims@princeton.edu November 26, 2013 What is a Bayesian perspective? A Bayesian perspective on scientific reporting views all
More informationVCMC: Variational Consensus Monte Carlo
VCMC: Variational Consensus Monte Carlo Maxim Rabinovich, Elaine Angelino, Michael I. Jordan Berkeley Vision and Learning Center September 22, 2015 probabilistic models! sky fog bridge water grass object
More informationModelling Operational Risk Using Bayesian Inference
Pavel V. Shevchenko Modelling Operational Risk Using Bayesian Inference 4y Springer 1 Operational Risk and Basel II 1 1.1 Introduction to Operational Risk 1 1.2 Defining Operational Risk 4 1.3 Basel II
More informationGraphical Models and Kernel Methods
Graphical Models and Kernel Methods Jerry Zhu Department of Computer Sciences University of Wisconsin Madison, USA MLSS June 17, 2014 1 / 123 Outline Graphical Models Probabilistic Inference Directed vs.
More informationBayesian inference. Fredrik Ronquist and Peter Beerli. October 3, 2007
Bayesian inference Fredrik Ronquist and Peter Beerli October 3, 2007 1 Introduction The last few decades has seen a growing interest in Bayesian inference, an alternative approach to statistical inference.
More informationApril 20th, Advanced Topics in Machine Learning California Institute of Technology. Markov Chain Monte Carlo for Machine Learning
for for Advanced Topics in California Institute of Technology April 20th, 2017 1 / 50 Table of Contents for 1 2 3 4 2 / 50 History of methods for Enrico Fermi used to calculate incredibly accurate predictions
More informationST 740: Markov Chain Monte Carlo
ST 740: Markov Chain Monte Carlo Alyson Wilson Department of Statistics North Carolina State University October 14, 2012 A. Wilson (NCSU Stsatistics) MCMC October 14, 2012 1 / 20 Convergence Diagnostics:
More informationBayesian Inference for Discretely Sampled Diffusion Processes: A New MCMC Based Approach to Inference
Bayesian Inference for Discretely Sampled Diffusion Processes: A New MCMC Based Approach to Inference Osnat Stramer 1 and Matthew Bognar 1 Department of Statistics and Actuarial Science, University of
More informationComputational statistics
Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated
More informationBayesian inference for factor scores
Bayesian inference for factor scores Murray Aitkin and Irit Aitkin School of Mathematics and Statistics University of Newcastle UK October, 3 Abstract Bayesian inference for the parameters of the factor
More informationA HIERARCHICAL MAX-STABLE SPATIAL MODEL FOR EXTREME PRECIPITATION
Submitted to the Annals of Applied Statistics arxiv: arxiv:0000.0000 A HIERARCHICAL MAX-STABLE SPATIAL MODEL FOR EXTREME PRECIPITATION By Brian J. Reich and Benjamin A. Shaby North Carolina State University
More informationBayesian model selection in graphs by using BDgraph package
Bayesian model selection in graphs by using BDgraph package A. Mohammadi and E. Wit March 26, 2013 MOTIVATION Flow cytometry data with 11 proteins from Sachs et al. (2005) RESULT FOR CELL SIGNALING DATA
More informationPaul Karapanagiotidis ECO4060
Paul Karapanagiotidis ECO4060 The way forward 1) Motivate why Markov-Chain Monte Carlo (MCMC) is useful for econometric modeling 2) Introduce Markov-Chain Monte Carlo (MCMC) - Metropolis-Hastings (MH)
More informationTutorial on Probabilistic Programming with PyMC3
185.A83 Machine Learning for Health Informatics 2017S, VU, 2.0 h, 3.0 ECTS Tutorial 02-04.04.2017 Tutorial on Probabilistic Programming with PyMC3 florian.endel@tuwien.ac.at http://hci-kdd.org/machine-learning-for-health-informatics-course
More informationFractional Imputation in Survey Sampling: A Comparative Review
Fractional Imputation in Survey Sampling: A Comparative Review Shu Yang Jae-Kwang Kim Iowa State University Joint Statistical Meetings, August 2015 Outline Introduction Fractional imputation Features Numerical
More informationBayesian spatial hierarchical modeling for temperature extremes
Bayesian spatial hierarchical modeling for temperature extremes Indriati Bisono Dr. Andrew Robinson Dr. Aloke Phatak Mathematics and Statistics Department The University of Melbourne Maths, Informatics
More informationSTATS 200: Introduction to Statistical Inference. Lecture 29: Course review
STATS 200: Introduction to Statistical Inference Lecture 29: Course review Course review We started in Lecture 1 with a fundamental assumption: Data is a realization of a random process. The goal throughout
More informationSequential Monte Carlo Methods for Bayesian Computation
Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter
More informationBayesian modelling. Hans-Peter Helfrich. University of Bonn. Theodor-Brinkmann-Graduate School
Bayesian modelling Hans-Peter Helfrich University of Bonn Theodor-Brinkmann-Graduate School H.-P. Helfrich (University of Bonn) Bayesian modelling Brinkmann School 1 / 22 Overview 1 Bayesian modelling
More informationOn Bayesian Computation
On Bayesian Computation Michael I. Jordan with Elaine Angelino, Maxim Rabinovich, Martin Wainwright and Yun Yang Previous Work: Information Constraints on Inference Minimize the minimax risk under constraints
More informationEM Algorithm II. September 11, 2018
EM Algorithm II September 11, 2018 Review EM 1/27 (Y obs, Y mis ) f (y obs, y mis θ), we observe Y obs but not Y mis Complete-data log likelihood: l C (θ Y obs, Y mis ) = log { f (Y obs, Y mis θ) Observed-data
More informationA Bayesian Approach to Phylogenetics
A Bayesian Approach to Phylogenetics Niklas Wahlberg Based largely on slides by Paul Lewis (www.eeb.uconn.edu) An Introduction to Bayesian Phylogenetics Bayesian inference in general Markov chain Monte
More information