An adaptive kriging method for characterizing uncertainty in inverse problems
Int. Statistical Inst.: Proc. 58th World Statistical Congress, 2011, Dublin (Session STS2)

FU Shuai, University Paris-Sud & INRIA, Department of Mathematics, 91405 Orsay, France (fshvip@gmail.com)
Couplet Mathieu and Bousquet Nicolas, EDF R&D, Department of Industrial Risk Management, 6 quai Watier, 78401 Chatou, France (mathieu.couplet@edf.fr, nicolas.bousquet@edf.fr)

1 Industrial context of the model

Our work is based on a simplified model related to the risk of dyke overflow during a flood. We want to predict the water level in this situation, with the following variables available: the observed quantities are the water level at the dyke position Z_c, the speed of the river V and the observed river flow Q; the unobservable quantities are the Strickler coefficient K_s and the river bed level at the dyke Z_v. In order to predict the future water level, we need to quantify the non-observed variables K_s and Z_v. In this flooding model, if we define the data vector Y = (Z_c, V)^T ∈ R^2, the non-observed vector X = (K_s, Z_v)^T ∈ R^2 and the observed variable d = Q ∈ R, we can rewrite the measured observation Y as a function of the non-observed vector X plus an error U:

Y = (Z_c, V)^T = H(K_s, Z_v; Q) + U.

In general, we consider the following model for the uncertainty treatment:

(1)  Y_i = H(X_i, d_i) + U_i,  (1 ≤ i ≤ n),

where n is the sample size; the Y_i ∈ R^p are the data vectors; H is a known function from R^{q_1+q_2} to R^p which is expensive in CPU time and often regarded as a black box; the X_i ∈ R^{q_1} are the non-observed random inputs, assumed independent and identically distributed (iid); the d_i ∈ R^{q_2} are the observed variables describing the experimental conditions; and the U_i ∈ R^p are the measurement errors, assumed iid. Moreover, the variables (X_i) and (U_i) are assumed to be independent.
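The generative model (1) can be sketched in a few lines of code. The sketch below is a minimal Python/NumPy illustration: the function `H` is a purely fictitious stand-in for the expensive hydraulic black box (the real flood code is not reproduced here), and all numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def H(x, d):
    """Stand-in for the expensive black-box code H: R^{q1+q2} -> R^p.
    Plays the role of the hydraulic model (K_s, Z_v; Q) -> (Z_c, V);
    the formula below is purely illustrative, not the real flood code."""
    ks, zv = x
    zc = zv + (d / (ks * 100.0)) ** 0.6          # fictitious water level
    v = d / (100.0 * (zc - zv))                  # fictitious water speed
    return np.array([zc, v])

def simulate(n, m, C, R, d):
    """Draw (X_i, Y_i), i = 1..n, from model (1): Y_i = H(X_i, d_i) + U_i."""
    X = rng.multivariate_normal(m, C, size=n)                 # X_i ~ N_q(m, C), iid
    U = rng.multivariate_normal(np.zeros(len(R)), R, size=n)  # U_i ~ N_p(0, R), iid
    Y = np.array([H(X[i], d[i]) for i in range(n)]) + U
    return X, Y

m = np.array([30.0, 50.0])      # hypothetical centre for (K_s, Z_v)
C = np.diag([25.0, 1.0])
R = np.diag([0.01, 0.01])
d = rng.uniform(500.0, 3000.0, size=20)   # observed flows Q_i
X, Y = simulate(20, m, C, R, d)
print(X.shape, Y.shape)  # (20, 2) (20, 2)
```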
To solve this inverse problem we choose a Bayesian framework and assume the following prior distributions for the random variables X_i and U_i:

X_i | m, C ~ N_q(m, C);  U_i ~ N_p(0, R),  (1 ≤ i ≤ n),

which allows us to take the experts' prior knowledge into account and reduces the identifiability problem. Moreover, assuming R known, the prior distributions of m and C are defined as follows:

m | C ~ N_q(µ, C/a), with a a hyperparameter to be specified;
C ~ IW_q(Λ, ν) ∈ M_{q×q}, where IW denotes an inverse-Wishart distribution.

Typically, we want to estimate the posterior distribution of θ = (m, C) given the data Y = (Y_1, ..., Y_n), which characterizes the distribution of the X_i. We choose Gibbs sampling to approximate the Bayesian posterior distributions. To define a Gibbs sampler, writing X = (X_1, ..., X_n) and Y = (Y_1, ..., Y_n), we compute the full conditional posterior distribution of each unknown quantity (m, C, X):

m | C, Y, X, ρ ~ N_q( (a µ + n X̄)/(n + a), C/(n + a) ),

C | m, Y, X, ρ ~ IW( Λ + Σ_{i=1}^n (m − X_i)(m − X_i)^T + a(m − µ)(m − µ)^T, ν + n + 1 ),

(2)  X | Y, m, C, ρ ∝ exp{ −(1/2) Σ_{i=1}^n [ (X_i − m)^T C^{-1} (X_i − m) + (Y_i − H(X_i, d_i))^T R^{-1} (Y_i − H(X_i, d_i)) ] },

with ρ the set of hyperparameters. As the last conditional distribution (2) of X has a complicated, non-standard form, a numerical simulation method is required, for example Metropolis-Hastings.

2 New kriging version of the model

In general, the CPU time consumed by the function H is very large (because the code behind H is complicated), so it is desirable to limit the number of calls to H. We propose a kriging approximation method in which H is regarded as a realisation of a Gaussian process H*, and Ĥ ≈ H denotes the resulting kriging predictor. To apply this method, we choose a bounded hypercube Ω ⊂ R^Q (Q = q_1 + q_2) and generate a set of N_max points D = {z_1, ..., z_{N_max}} ⊂ Ω, with z_i = (x_i, d_i), using a Latin Hypercube Sampling (LHS) maximin strategy; D is called a design. Our approximation is restricted to
Ω, and the number of calls to H is limited to N_max; we write H_D = {H(z_1), ..., H(z_{N_max})}. The predictor Ĥ is computed as a conditional expectation: for any point z* ∈ Ω,

Ĥ(z*) = E[ H(z*) | H_D ],

and its conditional variance, denoted MSE(z*) (Mean Squared Error), can be seen as a measure of the prediction accuracy. Following this idea, each Y_i can also be seen as a realisation of a new process
Y_i* associated with the Gaussian process H*. We rewrite the current model:

(3)  Y_i = Ĥ(X_i, d_i) + (H − Ĥ)(X_i, d_i) + U_i = Ĥ(X_i, d_i) + V_i(X_i, d_i),

where the new error term V_i(X_i, d_i) combines two sources of variance: the covariance R of the old error U_i and the MSE of the kriging method. We consider the whole sample Y = (Y_1, ..., Y_n) jointly, which allows us to account for the correlation between the predictions Ĥ(Z_i) and Ĥ(Z_j). Model (3) can then be written (assuming p = 2 for simplicity):

(4)  Y = (Y_1^1, ..., Y_n^1, Y_1^2, ..., Y_n^2)^T = Ĥ(Z) + V(Z),

where the first n components collect the first output coordinate and the last n the second. Given the variables Z = (Z_1, ..., Z_n), with Z_i = (X_i, d_i), and the function values H_D of the design, we deduce its distribution:

(5)  Y | Z, H_D ~ N( Ĥ(Z), R̃ + MSE(Z) ),

where R̃ is the diagonal variance matrix

(6)  R̃ = diag( R_11, ..., R_11, R_22, ..., R_22 )  (each value repeated n times),

with R_ii the i-th diagonal component of R; and MSE(Z) is the block-diagonal matrix composed of the MSE^k(Z), the variance-covariance matrix of H*^k(Z):

MSE(Z) = diag( MSE^1(Z), MSE^2(Z) )  (each block of size n × n).

The conditional distribution of the vector X ∈ R^{q×n} given Y can then be specified:

(7)  X | Y, m, C, ρ, d, H_D ∝ 1_Ω(X) |R̃ + MSE(Z)|^{-1/2} exp{ −(1/2) [ (Y − Ĥ(Z))^T (R̃ + MSE(Z))^{-1} (Y − Ĥ(Z)) + Σ_{i=1}^n (X_i − m)^T C^{-1} (X_i − m) ] }.
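A minimal kriging predictor returning both the conditional mean Ĥ(z*) and its MSE(z*) can be sketched as follows. This toy version emulates one scalar output component with a zero-mean Gaussian process and a Gaussian covariance whose parameters (`sig2`, `length`, `nugget`) are assumed fixed; a full implementation (such as the DACE toolbox the paper mentions) would also estimate a trend and the covariance parameters.

```python
import numpy as np

def gp_fit(Z, HD, length=1.0, sig2=1.0, nugget=1e-8):
    """Condition a zero-mean GP with Gaussian covariance on the design runs H_D."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = sig2 * np.exp(-0.5 * d2 / length**2) + nugget * np.eye(len(Z))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, HD))   # K^{-1} H_D
    return L, alpha

def gp_predict(zstar, Z, L, alpha, length=1.0, sig2=1.0):
    """Kriging mean H_hat(z*) = E[H(z*) | H_D] and its variance MSE(z*)."""
    k = sig2 * np.exp(-0.5 * ((Z - zstar) ** 2).sum(-1) / length**2)
    v = np.linalg.solve(L, k)
    return k @ alpha, sig2 - v @ v    # (mean, MSE)

# Six runs of a cheap stand-in for one output component of H, on a 1-D design
Z = np.linspace(0.0, 5.0, 6).reshape(-1, 1)
HD = np.sin(Z).ravel()
L, alpha = gp_fit(Z, HD)
m0, s0 = gp_predict(Z[2], Z, L, alpha)               # at a design point: MSE ~ 0
m1, s1 = gp_predict(np.array([50.0]), Z, L, alpha)   # far from the design: MSE ~ sig2
```

The behaviour of MSE(z*) here is exactly what motivates the adaptive design of Section 4: it vanishes at design points and grows away from them.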
3 Hybrid MCMC algorithm

We make the Gibbs algorithm explicit as follows. Given (m^[r], C^[r], X^[r]), for r = 0, 1, 2, ...:

1. Generate C^[r+1] ~ IW( Λ + Σ_{i=1}^n (m^[r] − X_i^[r])(m^[r] − X_i^[r])^T + a(m^[r] − µ)(m^[r] − µ)^T, ν + n + 1 ).
2. Generate m^[r+1] ~ N( (a µ + n X̄^[r])/(n + a), C^[r+1]/(n + a) ).
3. Generate X^[r+1]: the simulation of X^[r+1] consists of l iterations of a Metropolis-Hastings (MH) algorithm.

Consequently, our Gibbs sampling algorithm becomes a so-called hybrid MCMC algorithm, following Tierney (1994), which simultaneously uses Gibbs sampling steps and Metropolis-Hastings steps. In detail, let X_0 = (X_{1,0}, ..., X_{n,0}) = X^[r]. For s = 1, ..., l, update X component by component (for i = 1, ..., n):

1. Generate X*_{i,s} ~ J(· | X_{i,s−1}), where J is a proposal distribution.
2. Let

(8)  α(X_{i,s−1}, X*_{i,s}) = min( [ π_Ĥ(X*_s | Y, θ^[r+1], ρ, d, H_D) J(X_{i,s−1} | X*_{i,s}) ] / [ π_Ĥ(X_{s−1} | Y, θ^[r+1], ρ, d, H_D) J(X*_{i,s} | X_{i,s−1}) ], 1 ),

where X*_s = (X_{1,s}, ..., X_{i−1,s}, X*_{i,s}, X_{i+1,s−1}, ..., X_{n,s−1}) and X_{s−1} = (X_{1,s}, ..., X_{i−1,s}, X_{i,s−1}, X_{i+1,s−1}, ..., X_{n,s−1}).

3. Accept X_{i,s} = X*_{i,s} with probability α(X_{i,s−1}, X*_{i,s}); otherwise keep X_{i,s} = X_{i,s−1}.
4. Update X_s = (X_{1,s}, ..., X_{i,s}, X_{i+1,s−1}, ..., X_{n,s−1}); after the l sweeps, set X^[r+1] = X_l.

As proposal distribution in the MH algorithm, we take the Gaussian J = N(m^[r+1], C^[r+1]).
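The hybrid algorithm above can be sketched as follows. This is a schematic version with hypothetical helper names (`rinvwishart`, `gibbs_step`, `mh_update`); the black box H is passed in as an argument, and the independence proposal J = N(m, C) is exploited so that the prior factors cancel in ratio (8), leaving only a likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(1)

def rinvwishart(df, Lam):
    """C ~ IW_q(Lam, df), via Bartlett: C = W^{-1} with W ~ Wishart(df, Lam^{-1})."""
    q = Lam.shape[0]
    Lchol = np.linalg.cholesky(np.linalg.inv(Lam))
    A = np.zeros((q, q))
    for i in range(q):
        A[i, i] = np.sqrt(rng.chisquare(df - i))
        A[i, :i] = rng.standard_normal(i)
    W = Lchol @ A @ A.T @ Lchol.T
    return np.linalg.inv(W)

def gibbs_step(X, m_prev, mu, a, Lam, nu):
    """Steps 1-2: draw C^[r+1] and m^[r+1] from their full conditionals."""
    n = len(X)
    diff = m_prev - X                               # rows (m - X_i)
    scale = Lam + diff.T @ diff + a * np.outer(m_prev - mu, m_prev - mu)
    C = rinvwishart(nu + n + 1, scale)
    m = rng.multivariate_normal((a * mu + n * X.mean(0)) / (n + a), C / (n + a))
    return m, C

def mh_update(X, Y, d, m, C, R, Hfun, l=1):
    """Step 3: l Metropolis-within-Gibbs sweeps over the components of X.
    With the independence proposal J = N(m, C), the acceptance probability (8)
    reduces to the likelihood ratio."""
    Rinv = np.linalg.inv(R)
    X = X.copy()
    def loglik(x, y, di):
        r = y - Hfun(x, di)
        return -0.5 * r @ Rinv @ r
    for _ in range(l):
        for i in range(len(X)):
            prop = rng.multivariate_normal(m, C)
            if np.log(rng.uniform()) < loglik(prop, Y[i], d[i]) - loglik(X[i], Y[i], d[i]):
                X[i] = prop
    return X
```

One hybrid iteration is then `m, C = gibbs_step(X, m, mu, a, Lam, nu)` followed by `X = mh_update(X, Y, d, m, C, R, Hfun, l=5)`.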
4 Construction of an adaptive design

An important point for a kriging approximation is the choice of the design of experiments (DoE). To fill the whole input domain, we use the classic LHS-maximin strategy. But this general method is not always satisfactory: sometimes the design points are relatively far from the true value of X, and the MSE may explode at the edge of the experimental domain. We propose instead to construct a design adaptively, adding the information sequentially. The procedure is as follows:

1. Fix a computation budget N_max and a proportion p of points.
2. Choose a hypercubic domain Ω on which the proxy is valid.
3. Build an LHS-maximin design D with p·N_max points in Ω.
4. Decide whether the design is satisfactory (quality criterion). If it is, build the kriging predictor Ĥ and run the Gibbs (MH-within-Gibbs) algorithm.
5. If it is not, add the remaining (1 − p)·N_max points sequentially in the indicated area, according to an adaptive procedure, to improve the quality of the design.

The adaptive scheme requires several choices: the experimental domain Ω; the proportion p of points placed by LHS-maximin; a criterion measuring the quality of a design; the adaptive strategy for adding points; etc.

A small experiment with different numbers N_max of points. The number N_max of points of the design D is very important: increasing it improves the accuracy of the kriging approximation, but the computation requires more time. In the following numerical experiments we consider three cases: N_max = 100, 300, 500. Figure 1 gives an illustration: when we increase the number of points of the design D from 100 to 200, there is an obvious decrease of the MSE.

[Figure 1: MSE (5% quantile, median, 95% quantile) against an increasing number N_max of points of D.]

Now we consider three LHS-maximin designs with
100 points, 300 points and 500 points, and we compare the posterior distributions of the two parameters m and C obtained with these three kriging approximations, based on a sample drawn from the simulated Monte Carlo chain after the burn-in period. We notice a great difference when the 100-point kriging is applied.
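The LHS-maximin construction used for these designs can be sketched as follows: generate many random Latin Hypercube samples and keep the one with the largest minimal inter-point distance. The function names, candidate count and the bounds of Ω below are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def lhs(n, dim):
    """One random Latin Hypercube sample of n points in [0, 1]^dim:
    exactly one point in each of the n strata of every coordinate."""
    perms = np.stack([rng.permutation(n) for _ in range(dim)], axis=1)
    return (perms + rng.uniform(size=(n, dim))) / n

def maximin_lhs(n, dim, tries=200):
    """Keep, among `tries` LHS designs, the one maximising min_{i<j} ||z_i - z_j||."""
    best, best_d = None, -np.inf
    for _ in range(tries):
        D = lhs(n, dim)
        dist2 = ((D[:, None] - D[None]) ** 2).sum(-1)
        np.fill_diagonal(dist2, np.inf)
        if dist2.min() > best_d:
            best, best_d = D, dist2.min()
    return best

# A 100-point design on a hypercube Omega for z = (K_s, Z_v, Q),
# rescaled from [0, 1]^3 (bounds are hypothetical)
lo = np.array([15.0, 46.0, 500.0])
hi = np.array([60.0, 54.0, 3000.0])
D = lo + maximin_lhs(100, 3) * (hi - lo)
```

The repeated-restart search is a simple stand-in for more elaborate maximin optimisers; it preserves the Latin Hypercube stratification while pushing points apart.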
[Figure 2: m_1. Figure 3: m_2. Figure 4: C_11. Figure 5: C_22. Posterior distributions under the 100-, 300- and 500-point krigings.]

How do we choose the N_max design points adaptively? We give a detailed procedure:

1. Fix an experimental domain Ω and generate a design D of p·N_max points (p = 90%, for example) in Ω according to the LHS-maximin strategy.
2. Grid the domain Ω densely enough (50 values each for x_1, x_2 and d, for example).
3. At each grid point (x_1, x_2, d) = (x, d), compute a criterion value C_D(x, d), of one of two possible types:
   (a) weighted criterion:
       max_{(x,d) ∈ E} MSE(x, d)^α π(x)^{1−α},
       where π(x) ∝ [ 1 + (x − µ)^T ((1 + 1/a) Λ)^{-1} (x − µ) ]^{−(ν+1)/2}, and MSE(x, d) can be obtained easily with the DACE package;
   (b) weighted criterion taking the y_i into account (at iteration k):
       max_{(x,d) ∈ E} MSE(x, d)^α π(x | y_i, θ^[k])^{1−α},
       where π(x | y_i, θ^[k]) has to be updated at each MCMC iteration.
4. Complete the design sequentially with the (1 − p)·N_max additional points that maximise the criterion.

For the moment we use only the first criterion, in logarithmic form:

C_D(x, d) = α log(MSE(x, d)) − (1 − α) (ν + 1)/2 log[ 1 + (x − µ)^T ((1 + 1/a) Λ)^{-1} (x − µ) ],

where α can be chosen between 0 and 1. Taking p = 90% and N_max = 100, different values of α give the designs shown in Figures 6, 7 and 8. Figure 6 represents the design D with 90 points generated by LHS-maximin plus 10 additional points with the largest MSE values, while in Figures 7 and 8 we add the 10 points according to the MSE value as well as the prior distribution of X, whose prior mean µ is set to [40, 48], with weight α = 0.8 for Figure 7 and α = 0.5 for Figure 8. This strategy of adding points does allow us to diminish the kriging uncertainty. We compare the final results, the posterior distributions of the mean parameter m (Figures 9 and 10), in the three cases, using a 500-point kriging result as reference. In our example the choice α = 1 (only the MSE matters) gives the most satisfactory simulation results.

[Figures 6, 7 and 8: adaptive designs (90% LHS plus 10% sequential points) for α = 1, α = 0.8 and α = 0.5.]
[Figure 9: m_1. Figure 10: m_2. Posterior distributions for the different values of α, with the 500-point kriging as reference.]

Why did we choose p = 90%?
In fact, to make the best choice of p, we repeated the experiment with different values of p and obtained Figure 11. Clearly, p = 80%-90% seems to be a good choice.

Remark: when we add the (1 − p)·N_max points to D, we have to keep their physical positions apart, to avoid adding all the points in the same area. We therefore construct the DoE sequentially, adding one new point and recalculating the criterion C_D each time.
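The sequential completion described in the remark (add one point maximising C_D, then recompute) can be sketched as follows. To keep the sketch self-contained, the kriging MSE is replaced by a cheap proxy (squared distance to the nearest design point, which mimics its behaviour of vanishing at design points); in practice the true MSE(x, d), e.g. from the DACE toolbox, would be recomputed after every addition. The helper names are hypothetical.

```python
import numpy as np

def log_student_prior(mu, Lam, a, nu):
    """log pi(x) up to an additive constant, in the Student-type form
    -(nu + 1)/2 * log(1 + (x - mu)^T ((1 + 1/a) Lam)^{-1} (x - mu))."""
    Sinv = np.linalg.inv((1.0 + 1.0 / a) * Lam)
    def lp(X):
        diff = X - mu
        quad = np.einsum('ij,jk,ik->i', diff, Sinv, diff)  # row-wise quadratic form
        return -(nu + 1.0) / 2.0 * np.log1p(quad)
    return lp

def add_points_sequentially(design, candidates, k, alpha, log_prior):
    """Append k candidate points one by one, each maximising the criterion
    C_D(x, d) = alpha * log MSE(x, d) + (1 - alpha) * log pi(x),
    recomputing the (proxy) MSE after each addition."""
    design = design.copy()
    for _ in range(k):
        # proxy MSE: squared distance to the nearest current design point
        d2 = ((candidates[:, None, :] - design[None, :, :]) ** 2).sum(-1).min(axis=1)
        crit = alpha * np.log(d2 + 1e-12) + (1.0 - alpha) * log_prior(candidates)
        design = np.vstack([design, candidates[int(np.argmax(crit))]])
    return design
```

With α = 1 this greedily picks the candidate farthest from the current design (pure space-filling); smaller α pulls the added points toward the prior mass of X, matching the trade-off of Figures 6-8.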
[Figure 11: MSE (5% quantile, median, 95% quantile) against the percentage of points added sequentially to the LHS design.]

To illustrate the advantage of this adaptive idea, we compare the experimental results of the 100-point adaptive kriging methods, always using a 500-point kriging method as reference (Figures 12, 13, 14 and 15). There is a great improvement when we apply the adaptive kriging method with 100 points, especially for m_2 and C_11.

[Figure 12: m_1. Figure 13: m_2. Figure 14: C_11. Figure 15: C_22. Posterior distributions for the one-shot 100-point design, the sequential 100-point design, the classic 100-point design and the one-shot 500-point design.]

REFERENCES (RÉFÉRENCES)

[1] Robert, C.P. and Casella, G. (2004). Monte Carlo Statistical Methods, Second Edition. New York: Springer.
[2] Gelman, A., Carlin, J.B., Stern, H.S. and Rubin, D.B. (2004). Bayesian Data Analysis, Second Edition. Chapman & Hall/CRC.
[3] Celeux, G., Grimaud, A., Lefebvre, Y. and De Rocquigny, E. (2009). Identifying variability in multivariate systems through linearised inverse methods. Inverse Problems in Science & Engineering, to appear.
[4] Barbillon, P., Celeux, G., Grimaud, A., Lefebvre, Y. and De Rocquigny, E. (2009). Non linear methods for inverse statistical problems. INRIA research report.
[5] Bettinger, R. (2009). Inversion d'un système par krigeage. Thesis, Nice-Sophia Antipolis University.
[6] Dubourg, V. and Deheeger, F. Une alternative à la substitution pour les méta-modèles en analyse de fiabilité. Journées Nationales de la Fiabilité, Toulouse.
[7] Picheny, V., Ginsbourger, D., Roustant, O., Haftka, R.T. and Kim, N.-H. Adaptive Designs of Experiments for Accurate Approximation of a Target Region.
[8] Williams, B., Santner, T. and Notz, W. (2000). Sequential design of computer experiments to minimize integrated response functions. Statistica Sinica.
[9] Scheidt, C. (2006). Analyse statistique d'expériences simulées: Modélisation adaptative de réponses non-régulières par krigeage et plans d'expériences. Thesis, Louis Pasteur University, Strasbourg.
[10] Bastos, L.S. and O'Hagan, A. (2009). Diagnostics for Gaussian Process Emulators. University of Sheffield.
More informationMulti-fidelity sensitivity analysis
Multi-fidelity sensitivity analysis Loic Le Gratiet 1,2, Claire Cannamela 2 & Bertrand Iooss 3 1 Université Denis-Diderot, Paris, France 2 CEA, DAM, DIF, F-91297 Arpajon, France 3 EDF R&D, 6 quai Watier,
More informationMCMC algorithms for fitting Bayesian models
MCMC algorithms for fitting Bayesian models p. 1/1 MCMC algorithms for fitting Bayesian models Sudipto Banerjee sudiptob@biostat.umn.edu University of Minnesota MCMC algorithms for fitting Bayesian models
More informationPrinciples of Bayesian Inference
Principles of Bayesian Inference Sudipto Banerjee University of Minnesota July 20th, 2008 1 Bayesian Principles Classical statistics: model parameters are fixed and unknown. A Bayesian thinks of parameters
More informationBayesian model selection in graphs by using BDgraph package
Bayesian model selection in graphs by using BDgraph package A. Mohammadi and E. Wit March 26, 2013 MOTIVATION Flow cytometry data with 11 proteins from Sachs et al. (2005) RESULT FOR CELL SIGNALING DATA
More informationSTAT 425: Introduction to Bayesian Analysis
STAT 425: Introduction to Bayesian Analysis Marina Vannucci Rice University, USA Fall 2017 Marina Vannucci (Rice University, USA) Bayesian Analysis (Part 2) Fall 2017 1 / 19 Part 2: Markov chain Monte
More informationSummary of Talk Background to Multilevel modelling project. What is complex level 1 variation? Tutorial dataset. Method 1 : Inverse Wishart proposals.
Modelling the Variance : MCMC methods for tting multilevel models with complex level 1 variation and extensions to constrained variance matrices By Dr William Browne Centre for Multilevel Modelling Institute
More informationQuantile POD for Hit-Miss Data
Quantile POD for Hit-Miss Data Yew-Meng Koh a and William Q. Meeker a a Center for Nondestructive Evaluation, Department of Statistics, Iowa State niversity, Ames, Iowa 50010 Abstract. Probability of detection
More informationBayesian Estimation of DSGE Models 1 Chapter 3: A Crash Course in Bayesian Inference
1 The views expressed in this paper are those of the authors and do not necessarily reflect the views of the Federal Reserve Board of Governors or the Federal Reserve System. Bayesian Estimation of DSGE
More informationParameter Estimation. William H. Jefferys University of Texas at Austin Parameter Estimation 7/26/05 1
Parameter Estimation William H. Jefferys University of Texas at Austin bill@bayesrules.net Parameter Estimation 7/26/05 1 Elements of Inference Inference problems contain two indispensable elements: Data
More informationIntroduction to Bayesian methods in inverse problems
Introduction to Bayesian methods in inverse problems Ville Kolehmainen 1 1 Department of Applied Physics, University of Eastern Finland, Kuopio, Finland March 4 2013 Manchester, UK. Contents Introduction
More informationBayesian Linear Regression
Bayesian Linear Regression Sudipto Banerjee 1 Biostatistics, School of Public Health, University of Minnesota, Minneapolis, Minnesota, U.S.A. September 15, 2010 1 Linear regression models: a Bayesian perspective
More informationBayesian inference for multivariate skew-normal and skew-t distributions
Bayesian inference for multivariate skew-normal and skew-t distributions Brunero Liseo Sapienza Università di Roma Banff, May 2013 Outline Joint research with Antonio Parisi (Roma Tor Vergata) 1. Inferential
More informationOnline appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US
Online appendix to On the stability of the excess sensitivity of aggregate consumption growth in the US Gerdie Everaert 1, Lorenzo Pozzi 2, and Ruben Schoonackers 3 1 Ghent University & SHERPPA 2 Erasmus
More informationPOSTERIOR ANALYSIS OF THE MULTIPLICATIVE HETEROSCEDASTICITY MODEL
COMMUN. STATIST. THEORY METH., 30(5), 855 874 (2001) POSTERIOR ANALYSIS OF THE MULTIPLICATIVE HETEROSCEDASTICITY MODEL Hisashi Tanizaki and Xingyuan Zhang Faculty of Economics, Kobe University, Kobe 657-8501,
More informationSpatial Statistics with Image Analysis. Outline. A Statistical Approach. Johan Lindström 1. Lund October 6, 2016
Spatial Statistics Spatial Examples More Spatial Statistics with Image Analysis Johan Lindström 1 1 Mathematical Statistics Centre for Mathematical Sciences Lund University Lund October 6, 2016 Johan Lindström
More informationMonte Carlo Integration using Importance Sampling and Gibbs Sampling
Monte Carlo Integration using Importance Sampling and Gibbs Sampling Wolfgang Hörmann and Josef Leydold Department of Statistics University of Economics and Business Administration Vienna Austria hormannw@boun.edu.tr
More informationBayesian Gaussian Process Regression
Bayesian Gaussian Process Regression STAT8810, Fall 2017 M.T. Pratola October 7, 2017 Today Bayesian Gaussian Process Regression Bayesian GP Regression Recall we had observations from our expensive simulator,
More informationDAG models and Markov Chain Monte Carlo methods a short overview
DAG models and Markov Chain Monte Carlo methods a short overview Søren Højsgaard Institute of Genetics and Biotechnology University of Aarhus August 18, 2008 Printed: August 18, 2008 File: DAGMC-Lecture.tex
More informationLabor-Supply Shifts and Economic Fluctuations. Technical Appendix
Labor-Supply Shifts and Economic Fluctuations Technical Appendix Yongsung Chang Department of Economics University of Pennsylvania Frank Schorfheide Department of Economics University of Pennsylvania January
More informationOverlapping block proposals for latent Gaussian Markov random fields
NORGES TEKNISK-NATURVITENSKAPELIGE UNIVERSITET Overlapping block proposals for latent Gaussian Markov random fields by Ingelin Steinsland and Håvard Rue PREPRINT STATISTICS NO. 8/3 NORWEGIAN UNIVERSITY
More informationComputer Practical: Metropolis-Hastings-based MCMC
Computer Practical: Metropolis-Hastings-based MCMC Andrea Arnold and Franz Hamilton North Carolina State University July 30, 2016 A. Arnold / F. Hamilton (NCSU) MH-based MCMC July 30, 2016 1 / 19 Markov
More informationMustafa H. Tongarlak Bruce E. Ankenman Barry L. Nelson
Proceedings of the 0 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelspach, K. P. White, and M. Fu, eds. RELATIVE ERROR STOCHASTIC KRIGING Mustafa H. Tongarlak Bruce E. Ankenman Barry L.
More informationTimevarying VARs. Wouter J. Den Haan London School of Economics. c Wouter J. Den Haan
Timevarying VARs Wouter J. Den Haan London School of Economics c Wouter J. Den Haan Time-Varying VARs Gibbs-Sampler general idea probit regression application (Inverted Wishart distribution Drawing from
More informationMonetary and Exchange Rate Policy Under Remittance Fluctuations. Technical Appendix and Additional Results
Monetary and Exchange Rate Policy Under Remittance Fluctuations Technical Appendix and Additional Results Federico Mandelman February In this appendix, I provide technical details on the Bayesian estimation.
More informationThe two subset recurrent property of Markov chains
The two subset recurrent property of Markov chains Lars Holden, Norsk Regnesentral Abstract This paper proposes a new type of recurrence where we divide the Markov chains into intervals that start when
More informationMARKOV CHAIN MONTE CARLO
MARKOV CHAIN MONTE CARLO RYAN WANG Abstract. This paper gives a brief introduction to Markov Chain Monte Carlo methods, which offer a general framework for calculating difficult integrals. We start with
More informationSensitivity analysis and calibration of a global aerosol model
School of Earth and Environment Sensitivity analysis and calibration of a global aerosol model Lindsay Lee l.a.lee@leeds.ac.uk Ken Carslaw 1, Carly Reddington 1, Kirsty Pringle 1, Jeff Pierce 3, Graham
More informationMaking the Most of What We Have: A Practical Application of Multidimensional Item Response Theory in Test Scoring
Journal of Educational and Behavioral Statistics Fall 2005, Vol. 30, No. 3, pp. 295 311 Making the Most of What We Have: A Practical Application of Multidimensional Item Response Theory in Test Scoring
More informationMonte Carlo methods for sampling-based Stochastic Optimization
Monte Carlo methods for sampling-based Stochastic Optimization Gersende FORT LTCI CNRS & Telecom ParisTech Paris, France Joint works with B. Jourdain, T. Lelièvre, G. Stoltz from ENPC and E. Kuhn from
More informationBayesian Image Segmentation Using MRF s Combined with Hierarchical Prior Models
Bayesian Image Segmentation Using MRF s Combined with Hierarchical Prior Models Kohta Aoki 1 and Hiroshi Nagahashi 2 1 Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology
More informationMarkov Chain Monte Carlo Algorithms for Gaussian Processes
Markov Chain Monte Carlo Algorithms for Gaussian Processes Michalis K. Titsias, Neil Lawrence and Magnus Rattray School of Computer Science University of Manchester June 8 Outline Gaussian Processes Sampling
More informationLikelihood-free MCMC
Bayesian inference for stable distributions with applications in finance Department of Mathematics University of Leicester September 2, 2011 MSc project final presentation Outline 1 2 3 4 Classical Monte
More informationLecture 8: Bayesian Estimation of Parameters in State Space Models
in State Space Models March 30, 2016 Contents 1 Bayesian estimation of parameters in state space models 2 Computational methods for parameter estimation 3 Practical parameter estimation in state space
More information