Example: Ground Motion Attenuation


Example: Ground Motion Attenuation

Problem: Predict the probability distribution for Peak Ground Acceleration (PGA), the level of ground shaking caused by an earthquake. Earthquake records are used to update the predictive probability for PGA based on earthquake magnitude M, distance to the fault d, and local soil conditions G = (G_B, G_C) (soil classified A, B, or C).

Well-known attenuation relation developed by Boore, Joyner and Fumal (BJF, 1993). Let R = √(d² + h²), where h is a fictitious depth:

log₁₀(PGA) = b₁ + b₂(M − 6) + b₃(M − 6)² + b₄R + b₅ log₁₀(R) + b₆G_B + b₇G_C + ε
           = q(M, d, G; b, h) + ε,  where b = (b₁, ..., b₇)

ε = uncertain prediction error, modelled as N(0, σ²) (maximum-entropy model)
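
As a concrete reference, here is a minimal sketch of the attenuation relation as a Python function. The function name and the layout of the coefficient vector b are illustrative choices, not from the original slides.

```python
import numpy as np

def log10_pga_mean(M, d, G_B, G_C, b, h):
    """Mean of log10(PGA) under the BJF-style relation, q(M, d, G; b, h).

    b is the coefficient vector (b1, ..., b7); R = sqrt(d^2 + h^2).
    """
    b1, b2, b3, b4, b5, b6, b7 = b
    R = np.sqrt(d**2 + h**2)
    return (b1 + b2 * (M - 6) + b3 * (M - 6)**2
            + b4 * R + b5 * np.log10(R)
            + b6 * G_B + b7 * G_C)
```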

Example: Ground Motion Attenuation

Goal: Estimate p(PGA | U, D, M) for input U = (M, d, G).

Available set of data D = {U_n, Y_n : n = 1, ..., N}, where U_n = (M_n, d_n, G_n) (magnitude, distance, soil conditions) and the corresponding Y_n = (PGA)_n are data from earthquakes at various sites (we use N = 271 ground motion records from 20 earthquakes).

Model class M is based on the BJF model with a specified prior PDF over the model parameters θ = (b, h, σ).

Robust posterior predictive probability model:

p(PGA | U, D, M) = ∫ p(PGA | U, θ) p(θ | D, M) dθ

Bayesian Updating

Bayes' Theorem:  p(θ | D, M) = c p(θ | M) ∏_{n=1}^{N} p(Y_n | U_n, θ)

Computing Optimal Posterior Predictive Model:

Find the optimal (most probable) values θ̂ of the parameters, which maximize p(θ | D, M); then (Laplace's asymptotic approximation):

p(PGA | U, D, M) ≈ p(PGA | U, θ̂) + O(1/N)

Assumes M is globally identifiable based on D (θ̂ is unique), and a large amount of data is needed for an acceptable approximation (the updated PDF will then have a single sharp peak). Then there is no need to evaluate c, and θ̂ is insensitive to the choice of prior PDF.

Parameter estimation (i.e. using θ̂) is reasonable only under these conditions; otherwise there is a spurious reduction in the uncertainty in the predictions.
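
A minimal sketch of computing θ̂ numerically, assuming generic log-likelihood and log-prior functions (the names and the optimizer choice are illustrative). Note that the normalizing constant c plays no role in the maximization.

```python
import numpy as np
from scipy.optimize import minimize

def map_estimate(log_likelihood, log_prior, theta0):
    """Most probable parameters: maximize log p(theta | D, M), ignoring c."""
    neg_log_post = lambda th: -(log_likelihood(th) + log_prior(th))
    result = minimize(neg_log_post, np.asarray(theta0, dtype=float),
                      method="Nelder-Mead")
    return result.x
```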

Bayesian Updating (Continued)

Computing Robust Posterior Predictive Model:

c = normalizing constant, and p(θ | D, M) is difficult to evaluate. However, if we can generate M samples {θ_k : k = 1, ..., M} from p(θ | D, M), then we can approximate the robust PDF by the corresponding sample mean:

p(PGA | U, D, M) = ∫ p(PGA | U, θ) p(θ | D, M) dθ ≈ (1/M) ∑_{k=1}^{M} p(PGA | U, θ_k)

These samples can be obtained using stochastic simulation methods, e.g. the Gibbs sampler or Metropolis-Hastings (more later).
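
A sketch of the sample-mean approximation, assuming the Gaussian prediction-error model above. Here `mean_fn` stands in for q(M, d, G; b, h) (e.g. the `log10_pga_mean` sketch earlier) and each posterior sample is a (b, h, σ) triple; all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def robust_predictive_density(y_grid, U, theta_samples, mean_fn):
    """Approximate p(PGA | U, D, M) on a grid of log10(PGA) values y_grid
    by averaging p(PGA | U, theta_k) over the M posterior samples theta_k."""
    M_quake, d, G_B, G_C = U
    density = np.zeros_like(y_grid, dtype=float)
    for b, h, sigma in theta_samples:
        mu = mean_fn(M_quake, d, G_B, G_C, b, h)
        density += norm.pdf(y_grid, loc=mu, scale=sigma)
    return density / len(theta_samples)
```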

Sampling Posterior (Updated) PDF

Samples are generated using Markov Chain Monte Carlo. The prior PDF is chosen to reflect knowledge (or lack thereof):

For the regression coefficients b_i, take i.i.d. N(0, 10²), i.e. each has zero mean and a standard deviation of 10 (very flat).

For the depth parameter h (km), a lognormal distribution with mean and variance based on the depths of the earthquakes in the data set.
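
A minimal random-walk Metropolis sketch for drawing the posterior samples; the slides do not specify the sampler's details, so the proposal scale, seed, and function names here are illustrative assumptions.

```python
import numpy as np

def random_walk_metropolis(log_post, theta0, n_samples, step, seed=0):
    """Draw samples from p(theta | D, M) given the unnormalized log posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(proposal)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
```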

[Figure: MCMC samples from the posterior PDF, Model Class 4; plots over the parameters b₁, b₂, b₅, b₆, b₇, and h]

[Figure: Posterior samples, Model Class 4; pairwise scatter plots of b₁, b₂, b₅, b₆, b₇, and h, with samples shaded by < 1σ, < 2σ, > 2σ and × marking the BJF 93 values]

Optimal vs. Robust Predictive Analysis

Results using the MCMC samples for the robust PDF are compared to results for the optimal PDF computed from the values reported in BJF 93.

User input U = (M, d, G): magnitude 7.0, distance to fault 30 km, site geology stiff soil.

Posterior Predictive Analysis

1971 San Fernando, M 6.6, JPL Building 180, 23.7 km
1987 Whittier Narrows, M 6.1, 1488 Old House Rd, 19.0 km
1991 Sierra Madre, M 5.8, 535 S Wilson Ave, 18.1 km
1994 Northridge, M 6.7, 1150 Sierra Madre Villa, 36.2 km

Model Class Selection

Bayesian model class selection (Beck and Yuen 2004) for a set M of candidate model classes:

P(M_i | D, M) = p(D | M_i) P(M_i | M) / ∑_{j=1}^{I} p(D | M_j) P(M_j | M),  i = 1, ..., I

The evidence for model class M_i is given by

p(D | M_i) = ∫ p(D | M_i, θ_i) p(θ_i | M_i) dθ_i

Evaluating the evidence directly by stochastic simulation would require sampling from the prior, which is typically inefficient.
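
A small sketch of the normalization step, assuming log evidences for each candidate model class are already available; the computation is done in log space to avoid underflow (function and argument names are illustrative).

```python
import numpy as np

def model_class_probabilities(log_evidences, prior_probs=None):
    """P(M_i | D, M) ∝ p(D | M_i) P(M_i | M), normalized over the candidates."""
    log_ev = np.asarray(log_evidences, dtype=float)
    if prior_probs is None:
        prior_probs = np.full(len(log_ev), 1.0 / len(log_ev))  # uniform prior
    log_w = log_ev + np.log(prior_probs)
    log_w -= log_w.max()  # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()
```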

Model Class Selection (Continued)

The evidence may be expressed as

ln p(D | M_i) = ∫ ln[p(D | M_i, θ_i) p(θ_i | M_i)] p(θ_i | M_i, D) dθ_i + H[p(θ_i | M_i, D)]

where H denotes the information entropy. The first term may be approximated from MCMC samples:

∫ ln[p(D | M_i, θ_i) p(θ_i | M_i)] p(θ_i | M_i, D) dθ_i ≈ (1/M) ∑_{k=1}^{M} ln[p(D | M_i, θ_k) p(θ_k | M_i)]

Numerous methods exist for approximating the information entropy from samples.
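
A sketch of this estimator, assuming posterior samples are in hand. The entropy term here is approximated by fitting a Gaussian to the samples, which is just one of the many entropy estimators the slide alludes to; all names are illustrative.

```python
import numpy as np

def log_evidence_estimate(theta_samples, log_likelihood, log_prior):
    """ln p(D | M) ≈ (1/M) Σ_k ln[p(D | θ_k) p(θ_k)] + H[p(θ | D, M)].

    theta_samples has shape (M, d): M posterior samples of d parameters.
    """
    samples = np.asarray(theta_samples, dtype=float)
    first_term = np.mean([log_likelihood(t) + log_prior(t) for t in samples])
    # Gaussian approximation to the posterior entropy:
    # H ≈ 0.5 * ln[(2*pi*e)^d * det(Sigma)], Sigma = sample covariance
    d = samples.shape[1]
    cov = np.atleast_2d(np.cov(samples, rowvar=False))
    entropy = 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])
    return first_term + entropy
```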

Model Class Selection (Continued)

A new result generalizes, to any model class, the conclusion of Beck and Yuen (2004) made for a globally identifiable model class using an asymptotic approximation of the evidence for the model class:

ln p(D | M_i) = ∫ ln[p(D | M_i, θ_i) p(θ_i | M_i)] p(θ_i | M_i, D) dθ_i − ∫ ln[p(θ_i | M_i, D)] p(θ_i | M_i, D) dθ_i
             = ∫ ln p(D | M_i, θ_i) p(θ_i | M_i, D) dθ_i − ∫ ln[p(θ_i | M_i, D) / p(θ_i | M_i)] p(θ_i | M_i, D) dθ_i
             = Data Fit − Expected Information Gained from Data
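
A sketch of the two-term decomposition under the same Gaussian-entropy assumption as the previous block, since E[ln p(θ | D)] under the posterior equals the negative posterior entropy; function names and the estimator choice are illustrative, not from the slides.

```python
import numpy as np

def evidence_decomposition(theta_samples, log_likelihood, log_prior):
    """Split ln p(D | M) into (Data Fit, Expected Information Gain).

    Data Fit  = E[ln p(D | θ)] under the posterior samples.
    Info Gain = E[ln p(θ | D) / p(θ)], with E[ln p(θ | D)] replaced by the
    negative Gaussian-fit entropy of the samples (an assumption of this sketch).
    """
    samples = np.asarray(theta_samples, dtype=float)  # shape (M, d)
    data_fit = np.mean([log_likelihood(t) for t in samples])
    d = samples.shape[1]
    cov = np.atleast_2d(np.cov(samples, rowvar=False))
    entropy = 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])
    info_gain = -entropy - np.mean([log_prior(t) for t in samples])
    return data_fit, info_gain
```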

Model Class Selection

log₁₀(PGA) = b₁ + b₂(M − 6) + b₃(M − 6)² + b₄R + b₅ log₁₀(R) + b₆G_B + b₇G_C + ε,  R = √(d² + h²)

Posterior means, with posterior standard deviations in parentheses; a missing coefficient means the corresponding term is omitted from that model class.

BJF 93:  b₁ = −0.105, b₂ = 0.229, b₅ = −0.778, b₆ = 0.162, b₇ = 0.251, h = 5.570, σ = 0.230
Model 1: b₁ = 0.312 (0.327), b₂ = 0.274 (0.034), b₃ = −0.082 (0.039), b₄ = 0.003 (0.002), b₅ = −1.079 (0.252), b₆ = 0.169, b₇ = 0.242 (0.038), h = 8.668 (2.631), σ = 0.206, Prob. = 0.01%
Model 2: b₁ = 0.276 (0.326), b₂ = 0.223 (0.023), b₄ = 0.003 (0.002), b₅ = −1.079 (0.254), b₆ = 0.174, b₇ = 0.264 (0.037), h = 8.424 (2.534), σ = 0.207, Prob. = 0.18%
Model 3: b₁ = −0.076 (0.103), b₂ = 0.274 (0.034), b₃ = −0.076 (0.040), b₅ = −0.759 (0.064), b₆ = 0.165, b₇ = 0.241 (0.038), h = 6.031 (1.510), σ = 0.206, Prob. = 0.07%
Model 4: b₁ = −0.056 (0.108), b₂ = 0.226 (0.023), b₅ = −0.803 (0.063), b₆ = 0.172, b₇ = 0.262, h = 6.336 (1.162), σ = 0.207, Prob. = 2.12%
Model 5: b₁ = −0.896 (0.043), b₂ = 0.222 (0.026), b₄ = −0.008 (0.001), b₆ = 0.143 (0.039), b₇ = 0.264 (0.041), h = 1.936 (1.664), σ = 0.230, Prob. = 0.00%
Model 6: b₁ = 0.255 (0.114), b₂ = 0.227 (0.025), b₅ = −0.888 (0.072), h = 7.026 (1.720), σ = 0.225, Prob. = 0.00%
Model 7: b₂ = 0.230 (0.022), b₅ = −0.834 (0.021), b₆ = 0.163 (0.033), b₇ = 0.251 (0.034), h = 6.834 (1.041), σ = 0.207, Prob. = 97.62%

Comparison by Mean Data Fit Only

(Same model classes and posterior parameter estimates as the previous slide; here the model classes are compared by the normalized mean data fit alone rather than by posterior probability.)

Data Fit (%): Model 1: 40.67; Model 2: 6.62; Model 3: 35.61; Model 4: 7.58; Model 5: 0.00; Model 6: 0.00; Model 7: 9.52

[Figure: Posterior samples, Model Class 7; pairwise scatter plots of b₂, b₅, b₆, b₇, and h, with samples shaded by < 1σ, < 2σ, > 2σ and × marking the BJF 93 values]