Technometrics Publication details, including instructions for authors and subscription information:


This article was downloaded by: [Texas A&M University Libraries] on 02 September 2014. Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales; registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Technometrics. Publication details, including instructions for authors and subscription information:

Bayesian Uncertainty Quantification for Subsurface Inversion Using a Multiscale Hierarchical Model

Anirban Mondal (a), Bani Mallick (a), Yalchin Efendiev (b) & Akhil Datta-Gupta (c)
(a) Department of Statistics, Texas A&M University, College Station, TX
(b) Department of Mathematics, Texas A&M University, College Station, TX
(c) Petroleum Engineering Department, Texas A&M University, College Station, TX

Accepted author version posted online: 06 Sep 2013. Published online: 24 Jul 2014.

To cite this article: Anirban Mondal, Bani Mallick, Yalchin Efendiev & Akhil Datta-Gupta (2014), Bayesian Uncertainty Quantification for Subsurface Inversion Using a Multiscale Hierarchical Model, Technometrics, 56:3, DOI: 10.1080/

To link to this article:

Supplementary materials for this article are available online.

Bayesian Uncertainty Quantification for Subsurface Inversion Using a Multiscale Hierarchical Model

Anirban MONDAL and Bani MALLICK, Department of Statistics, Texas A&M University, College Station, TX (anirban@stat.tamu.edu; bmallick@stat.tamu.edu)
Yalchin EFENDIEV, Department of Mathematics, Texas A&M University, College Station, TX (efendiev@math.tamu.edu)
Akhil DATTA-GUPTA, Petroleum Engineering Department, Texas A&M University, College Station, TX (datta.gupta@pe.tamu.edu)

We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a random field (spatial or temporal). The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources, and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loève expansion is used for dimension reduction of the random field. Furthermore, we use a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we show that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. Computational challenges in this construction arise from the need for repeated evaluations of the forward model (e.g., in the context of MCMC) and are compounded by the high dimensionality of the posterior. We develop a two-stage reversible jump MCMC algorithm that has the ability to screen bad proposals in an inexpensive first stage. Numerical results are presented by analyzing simulated as well as real data from a hydrocarbon reservoir. This article has supplementary material available online.

KEY WORDS: Bayesian hierarchical model; Bayesian inverse problems; Karhunen-Loève expansion; Two-stage reversible jump MCMC.
1. INTRODUCTION

Mathematical models are studied using computer simulations in almost all areas of applied and computational mathematics. The indirect estimation of model parameters or inputs from observations constitutes an inverse problem. Such problems arise frequently in science and engineering, with applications in weather forecasting, climate prediction, chemical kinetics, and oil reservoir forecasting. In practical settings, observations are inevitably noisy and may be limited in number or resolution. Quantifying the uncertainty in inputs or parameters is then essential for predictive modeling and simulation-based decision making. For definiteness, the focus of this article is on a petroleum reservoir problem. However, we hope that the developed theory, methodology, and computational tools will be of general interest and value.

Reservoir simulation models are widely used by oil and gas companies for production forecasts and for making investment decisions. If it were possible for geoscientists and engineers to know the physical properties, like the locations of oil and gas, the permeability, the porosity, and the multiphase flow properties, at all locations in a reservoir, it would be conceptually possible to develop a mathematical model that could be used to predict the outcome of any action. This model is usually a set of partial differential equations. These physical properties are also known as model variables or input variables. If the model variables are known, outcomes (output variables) can be predicted, usually by running a numerical reservoir simulator that solves a discretized approximation to those partial differential equations. This is known as the forward problem. Unfortunately, most oil and gas reservoirs are inconveniently buried beneath thousands of feet of overburden.
Direct observations of physical properties of the reservoir are available only at a few well locations. Additionally, we have some indirect observations known as the production data, which are typically made at the surface, either at the wellhead or at distributed

© 2014 American Statistical Association and the American Society for Quality
Color versions of one or more of the figures in the article can be found online.

locations. The main intention is to determine the plausible physical properties of the reservoir given these direct and indirect observations. This is an inverse problem, and the solution of the inverse problem provides an estimate of the characteristics of the subsurface media, which is usually a spatial or spatiotemporal field. To solve this inverse problem, the mismatch between simulated (from the numerical reservoir simulator) and observed measurements of production data is minimized. This method is known as history matching in petroleum engineering.

Classical statistical approaches to inverse problems have used regularization methods to impose well-posedness, and solve the resulting deterministic problems by optimization and other means (Vogel 2002). Here we focus on the Bayesian approach, which contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources, and provides a quantitative assessment of uncertainty in the inverse solution (Kaipio and Somersalo 2004; Stuart 2010). Indeed, the Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. Learning unknown inputs by using observed data is known as calibration. The principles of Bayesian calibration for computer codes are set out in Kennedy and O'Hagan (2001).

In this article, we consider inverse problems whose solutions are unknown functions, say spatial or temporal fields (Ramsay and Silverman 2005; Tarantola 2005). Estimating fields rather than parameters typically increases the ill-posedness of the inverse problem, since one is recovering an infinite-dimensional object from a finite amount of data. Most existing studies explored the value of the field on a finite set of grid points and then employed Gaussian process or Markov random field priors on them (Lee et al. 2000; Ferreira et al. 2002; Lee et al. 2002).
That way the dimension of the posterior is tied to the discretization of the field, and computational methods for similar problems have been developed by several authors (Higdon, Swall, and Kern 1998; Banerjee et al. 2008). In contrast, we use dimension reduction in the Bayesian formulation of inverse problems, and allow the dimensionality to depend on both the prior and the data. We employ the Karhunen-Loève (K-L) expansion of the unknown field (Loève 1977). The number of terms in the K-L expansion determines how much information is truly required to capture variation among realizations of the unknown field. Usually, this number of terms as well as the parameters in the covariance function are assumed to be known (see Efendiev et al. 2005; Marzouk and Najm 2009). We instead treat them as additional model unknowns and use a reversible jump Metropolis (RJM) algorithm to handle this random-dimension situation. Since the parameters of the covariance function are unknown, at each step of the reversible jump MCMC procedure we would have to recompute the K-L expansion of the covariance function, which is very computationally demanding. Hence, we propose an alternative approach in which we precompute the K-L expansion for a given set of parameter values and then use linear interpolation to find the respective eigenvalues and eigenvectors for a proposed new value of the parameters. This linear interpolation makes the computation much faster. Using matrix perturbation theory, we show that if the interpolation grid spacing is small, the approximated eigenvalues and eigenvectors are very close to the true ones.

Furthermore, to obtain physically meaningful results, we incorporate additional information on the unknown field through spatially smoothing priors as well as additional multiscale data. Several methods have been previously introduced to incorporate multiscale data, with a primary focus on integrating seismic and well data.
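The precompute-and-interpolate idea can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 10 x 10 spatial grid, the grid of correlation lengths, the isotropic squared-exponential covariance, and the sign alignment of eigenvectors between neighboring grid points are all assumptions of the sketch.

```python
import numpy as np

def sq_exp_cov(pts, l, sigma2=1.0):
    # Isotropic squared-exponential covariance matrix on a set of grid points.
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-d2 / (2.0 * l * l))

# Spatial grid (10 x 10 points on the unit square) and a coarse grid of
# correlation lengths at which eigenpairs are precomputed once.
g = np.linspace(0.0, 1.0, 10)
pts = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
l_grid = np.linspace(0.1, 0.5, 9)

pre = []
for l in l_grid:
    lam, phi = np.linalg.eigh(sq_exp_cov(pts, l))
    pre.append((lam[::-1], phi[:, ::-1]))   # sort eigenpairs in descending order

def interp_eigs(l_new, m):
    """Linearly interpolate the m leading eigenpairs between the two
    nearest precomputed correlation lengths (cheap MCMC-step surrogate)."""
    j = int(np.clip(np.searchsorted(l_grid, l_new) - 1, 0, len(l_grid) - 2))
    w = (l_new - l_grid[j]) / (l_grid[j + 1] - l_grid[j])
    lam_a, phi_a = pre[j][0][:m], pre[j][1][:, :m]
    lam_b, phi_b = pre[j + 1][0][:m], pre[j + 1][1][:, :m]
    # Align eigenvector signs before interpolating (eigh fixes signs arbitrarily).
    phi_b = phi_b * np.sign((phi_a * phi_b).sum(axis=0))
    return (1 - w) * lam_a + w * lam_b, (1 - w) * phi_a + w * phi_b
```

Only the eigendecompositions on `l_grid` are ever computed with `eigh`; any proposed correlation length inside the grid costs a single weighted average.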
These methods include conventional techniques such as cokriging or block-kriging and its variations, as described by Behrens (1998), Deutsch, Srinivasan, and Mo (1996), and Xu et al. (1992). Most kriging-based methods require variogram construction, which can be difficult because of the limited availability of data on the unknown field. We employ a Gaussian process prior for the unknown field and use a hierarchical Bayes model to incorporate multiscale data. In this multiscale Bayesian framework, we show that the inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm.

In our model, the likelihood function contains the forward solver equations (several differential equations), which is not explicitly available and is very expensive to compute. Hence, instead of the reversible jump MCMC algorithm, we propose a two-stage reversible jump MCMC algorithm. In this algorithm, the proposals are screened in the first stage using the forward solver on an upscaled coarse-grid, which is inexpensive due to the small dimension of the coarse-grid. A proposal is passed to the final stage only if it is accepted at the first stage. Thus, the two-stage algorithm reduces the computational effort by rejecting bad proposals at the initial stage. We show that the proposed two-stage reversible jump MCMC satisfies the detailed balance condition.

Numerical results are presented for the estimation of two-dimensional permeability fields obtained from petroleum reservoirs. The permeability field is characterized by two-point correlation functions with an unknown mean. We assume that the values of the fine-scale permeabilities are known at the wells and that permeability data on a coarse scale are available. Our numerical results illustrate that the proposed method can adequately predict this permeability field.

The article is organized as follows. In Section 2 we formulate the inverse problem and discuss various examples.
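The screening mechanism of the two-stage idea can be sketched for a single Metropolis step with a symmetric random-walk proposal. Here `coarse_logpost` and `fine_logpost` are placeholder functions standing in for the coarse-grid and fine-grid posteriors; this is a schematic of the mechanism under the symmetric-proposal assumption, not the authors' reservoir code.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_step(tau, propose, coarse_logpost, fine_logpost):
    """One two-stage Metropolis step with a symmetric proposal: a cheap
    coarse-grid screen, then the expensive fine-grid accept/reject."""
    tau_new = propose(tau)
    # Stage 1: screen with the inexpensive coarse-grid posterior.
    if np.log(rng.uniform()) >= coarse_logpost(tau_new) - coarse_logpost(tau):
        return tau          # cheap rejection: the fine solver never runs
    # Stage 2: the coarse acceptance becomes part of the effective proposal,
    # so this corrected ratio preserves detailed balance for the fine posterior.
    log_a = (fine_logpost(tau_new) - fine_logpost(tau)) \
          - (coarse_logpost(tau_new) - coarse_logpost(tau))
    return tau_new if np.log(rng.uniform()) < log_a else tau
```

With a symmetric proposal the stage-2 ratio reduces to a fine-to-coarse correction, so proposals already vetted by the coarse model are rarely rejected at the expensive stage.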
In Section 3, we discuss the hierarchical Bayes model and formulate the posterior distribution. In Section 4, Section 5, and Section 6 we discuss the Metropolis-Hastings, reversible jump MCMC, and two-stage reversible jump MCMC techniques, respectively, to sample from the posterior. Finally, in Section 7, we present numerical results. Section 8 concludes the article with a brief discussion.

2. FORMULATION

This section introduces the forward model and the corresponding inverse problem. As stated in the Introduction, we want to estimate a random field Y(x, ω), x ∈ D, ω ∈ Ω, where Ω is the sample space of a probability space (Ω, U, P) with sigma algebra U over Ω, P is a probability measure on U, and D ⊂ R^n is a bounded spatial domain. Y is treated as the model variable or input variable. If Y is known, then outcomes (output variables, responses) can be predicted, usually by running a numerical simulator that solves a discretized approximation to a system of nonlinear partial differential equations. Different names are used in different fields for this model; for example, in reservoir simulations the nonlinear function that maps

Figure 1. The figure describes the forward simulator. On the left, a typical permeability spatial field is shown, which is the input for the forward simulator. On the right, the output from the forward simulator is shown, which is the fractional flow (water-cut) versus the pore volume injected (PVI).

the input variables Y to the output G(Y) is called the forward simulator, and the associated modeling problem is called the forward problem. Due to model discrepancy, the mathematical model G may not represent the physical system in the real world perfectly. Moreover, due to the presence of measurement error and other sources of uncertainty, the observed output responses (say d) will differ from those produced by the forward model. In an additive model framework, we can relate the observations d to the unknown field Y as

d = G(Y) + ε, (1)

where ε can be (roughly) viewed as the combined model discrepancy error and measurement error. We assume ε ∼ MVN(0, σ_d² I). The problem considered here is the inverse of this forward problem, where we want to estimate the model parameter, that is, the random field Y, based on the observations d. A limited number of direct data are available on the spatial field Y(x, ω) on a fine-grid, denoted y_o. The observed data y_o on the fine-scale spatial field are extremely sparse. Furthermore, additional data on Y may be available on a relatively coarser grid, say y_c. We desire to solve the inverse problem of estimating Y given the data from the output d, the coarse-scale data y_c on the spatial field, and the data y_o on the fine-scale spatial field. First we discuss our application to reservoir characterization; then we move to the general Bayesian formulation of this problem.
2.1 Reservoir Characterization

Petroleum reservoirs are complex geological formations that exhibit a wide range of physical and chemical heterogeneities. These heterogeneities span multiple length scales and are impossible to describe in a deterministic fashion. Geostatistics and, more specifically, stochastic modeling of reservoir heterogeneities are being increasingly considered by reservoir and petroleum engineers for their potential in generating more accurate reservoir models together with realistic measures of spatial uncertainty. The goal of reservoir characterization is to provide a numerical model of reservoir attributes, such as hydraulic conductivities (permeability), storativities (porosity), fluid saturation, etc. These attributes are then used as inputs in the forward model, represented by various flow simulators, to forecast future reservoir performance and oil recovery potential.

In most flow situations, the single most influential input is the permeability spatial field, k in our notation. Permeability is an important concept in porous media flow (such as the flow of underground oil). Physically, permeability arises both from the existence of pores and from the average structure of the connectivity of pores. As permeability takes positive values, we transform Y = log(k) for modeling convenience. The main available response is the fractional flow, or water-cut, data, which is the fraction of water produced in relation to the total production rate in a two-phase oil-water flow reservoir, denoted by d. The forward simulator operator G (see Figure 1), which maps the permeability field to the water-cut data through a logit transformation, is given by d = logit[G(Y)] + ε. G is obtained from Darcy's law, which involves several partial differential equations. The specification of G is described in Section 7.1. We obtain permeability data on different scales.
The fine-scale data represent point measurements such as well logs and cores, whereas the coarse-scale data can be obtained from seismic traces. Our intention is to infer the fine-scale permeability field using the data from the output (fractional flow) and the coarse-scale data. In this article, our simulated examples and the practical oil-field example deal with reservoir characterization, but our method can be easily adapted to other examples.

3. BAYESIAN FRAMEWORK

We now explain the general Bayesian framework used to solve the inverse problem of inferring the random field Y from the equation

d = logit(G(Y)) + ε. (2)

We have the response data d, some observations on Y at the fine scale denoted by y_o, and some coarse-scale observations of Y, say y_c. The Bayesian solution of the inverse problem is the posterior distribution of Y conditioned on all the observations, P(Y | d, y_c, y_o). We express this posterior distribution using Bayes' theorem as

P(Y | d, y_c, y_o) ∝ P(d | Y, y_c, y_o) P(y_c | Y, y_o) P(y_o | Y) P(Y). (3)

We need to specify each of the probabilities on the right-hand side of this expression to develop the hierarchical Bayesian model. Therefore, the steps to develop the hierarchical Bayes model are to specify (i) P(Y): the prior model for the unknown random field Y, where we use the Karhunen-Loève expansion to parameterize Y; (ii) P(y_o | Y): the conditional probability of the fine-scale observations given the field Y; (iii) P(y_c | Y, y_o): a model for the coarse-scale observations y_c conditioned on the fine-scale observations y_o and on Y, using an upscaling technique; and (iv) P(d | Y, y_c, y_o): the likelihood function, which is obtained from (2). In the following sections, we provide the details of each of these modeling parts.

3.1 Modeling the Prior Process P(Y)

One of the commonly used stochastic descriptions of spatial fields is based on a two-point correlation function of the field. For spatial fields described by a two-point correlation function, it is assumed that R(x, x′) = E[Y(x, ω) Y(x′, ω)] is known, where E[·] refers to the expectation (i.e., the average over all realizations) and x, x′ are points in the spatial domain. In applications, the spatial fields are considered to be defined on a discrete grid. In this case, R(x, x′) is a square matrix with N rows and N columns, where N is the number of grid blocks in the domain. For spatial fields described by a two-point correlation function, one can use the Karhunen-Loève expansion (KLE), following Wong (1971), to obtain a spatial field description with possibly fewer degrees of freedom. This is done by representing the spatial field in terms of an optimal L² basis. By truncating the expansion, we can represent the spatial field by a small number of random parameters. We briefly recall some properties of the KLE. For simplicity, we assume that E[Y(x, ω)] = 0. Suppose Y(x, ω) is a second-order stochastic process with E[Y²(x, ω)] < ∞ for all x ∈ D.
Given an orthonormal basis {φ_i} in L², we can expand Y(x, ω) as a general Fourier series Y(x, ω) = Σ_i Y_i(ω) φ_i(x), where Y_i(ω) = ∫_D Y(x, ω) φ_i(x) dx. We are interested in the special L² basis {φ_i} that makes the random variables Y_i uncorrelated, that is, E(Y_i Y_j) = 0 for all i ≠ j. The basis functions {φ_i} satisfy E[Y_i Y_j] = ∫_D φ_i(x) ∫_D R(x, x′) φ_j(x′) dx′ dx = 0 for i ≠ j. Since {φ_i} is a complete basis in L², it follows that the φ_i(x) are eigenfunctions of R(x, x′):

∫_D R(x, x′) φ_i(x′) dx′ = λ_i φ_i(x), i = 1, 2, ..., (4)

where λ_i = E[Y_i²] > 0. Furthermore, we have R(x, x′) = Σ_i λ_i φ_i(x) φ_i(x′). Denote θ_i = Y_i / √λ_i; then the θ_i satisfy E(θ_i) = 0 and E(θ_i θ_j) = δ_ij. It follows that

Y(x, ω) = Σ_i √λ_i θ_i(ω) φ_i(x), (5)

where the φ_i and λ_i satisfy (4). We assume that the eigenvalues are ordered as λ_1 ≥ λ_2 ≥ .... The expansion (5) is called the Karhunen-Loève expansion. In the KLE (5), the L² basis functions φ_i(x) are deterministic and resolve the spatial dependence of the spatial field; the randomness is represented by the scalar random variables θ_i. After the domain D is discretized by a rectangular mesh, the continuous KLE (5) reduces to finitely many terms and the φ_i(x) are discrete fields. Generally, we only need to keep the leading-order terms (quantified by the magnitude of λ_i) and still capture most of the energy of the stochastic process Y(x, ω). For an N_KL-term KLE approximation Y_{N_KL} = Σ_{i=1}^{N_KL} √λ_i θ_i φ_i, define the energy ratio of the approximation as

e(N_KL) = E‖Y_{N_KL}‖² / E‖Y‖² = Σ_{i=1}^{N_KL} λ_i / Σ_{i=1}^∞ λ_i. (6)

If the λ_i, i = 1, 2, ..., decay very fast, then the truncated KLE is a good approximation of the stochastic process in the L² sense. There are different types of spatial covariance functions R(x, x′) considered in spatial statistics, for example, spherical, exponential, squared exponential (Gaussian), and Matérn class covariance functions.
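On a discrete grid, the truncation rule based on the energy ratio (6) amounts to an eigendecomposition of the covariance matrix followed by a cumulative-sum cutoff. A minimal numeric sketch (the 1-D grid, the correlation length 0.2, and the 90% threshold are illustrative choices, not values from the article):

```python
import numpy as np

# Discrete squared-exponential covariance on a 1-D grid (unit variance).
x = np.linspace(0.0, 1.0, 50)
R = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * 0.2 ** 2))

lam = np.linalg.eigvalsh(R)[::-1]        # eigenvalues lambda_1 >= lambda_2 >= ...
energy = np.cumsum(lam) / lam.sum()      # e(N_KL) for N_KL = 1, 2, ...
n_kl = int(np.searchsorted(energy, 0.90)) + 1   # smallest N_KL with e >= 0.90
```

For a smooth kernel the λ_i decay rapidly, so only a handful of terms are needed to reach 90% of the energy.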
In our examples we assume that the unknown spatial field is smooth, so we use the squared exponential covariance structure, although the method is not restricted to this particular covariance structure. In this case R(x, x′) is defined as

R(x, x′) = σ² exp( −(x_1 − x′_1)²/(2 l_1²) − (x_2 − x′_2)²/(2 l_2²) ),

where l_1 and l_2 are the correlation lengths in each direction and σ² is the variance. We reparameterize the spatial field Y by the K-L expansion and keep the leading m terms. The m-term KLE approximation is

Y_m = θ_0 + Σ_{i=1}^m √λ_i θ_i φ_i = B(l_1, l_2, σ²) θ, (7)

where B = [1, √λ_1 φ_1, ..., √λ_m φ_m] and θ = (θ_0, θ_1, ..., θ_m)′. Here, B depends only on l_1, l_2, and σ². Consequently, we have a parametric representation of the field Y through (l_1, l_2, σ², m, θ_0, θ_1, ..., θ_m), and we can evaluate Y once we know these parameter values. First, we develop the model and the computation schemes for a fixed m; afterward we extend them to unknown m in Section 5. Therefore, using Bayes' theorem, we can write the posterior P(Y | d, y_c, y_o) given in (3) in terms of this set of parameters as

P(θ, l_1, l_2, σ² | d, y_c, y_o) ∝ P(d | θ, l_1, l_2, σ², y_c, y_o) P(y_c | θ, l_1, l_2, σ², y_o) P(y_o | θ, l_1, l_2, σ²) P(θ) P(l_1, l_2) P(σ²)
∝ P(d | θ, l_1, l_2, σ²) P(y_c | θ, l_1, l_2, σ²) P(y_o | θ, l_1, l_2, σ²) P(θ) P(l_1, l_2) P(σ²). (8)

3.2 Modeling the Fine-Scale Data P(y_o | Y)

The fine-scale observations are obtained at some locations of the field Y, and we specify a model P(y_o | Y), or equivalently P(y_o | σ², θ, l_1, l_2), as y_o = y_p + ε_k, where y_p is the fine-scale spatial field at the given well locations x_obs obtained from the K-L expansion described in Section 3.1, and ε_k is the model error for the K-L approximation. We assume that ε_k follows a multivariate normal distribution with mean 0 and covariance σ_k² I, that is, y_o | θ, l_1, l_2, σ², σ_k² ∼ MVN(y_p, σ_k² I). The prior for σ_k² is assumed to be σ_k² ∼ InverseGamma(a_k, b_k). After integrating out σ_k², we obtain

P(y_o | θ, l_1, l_2, σ²) ∝ Γ(a_k + N_obs/2) [ b_k + ½ (y_o − y_p)′(y_o − y_p) ]^{−(a_k + N_obs/2)}, (9)

where N_obs is the number of observations of the fine-scale permeability field.

3.3 Modeling P(y_c | Y, y_o) Through Upscaling

In many cases coarse-scale data are readily available and contain important information for reducing the uncertainty in the estimation of the fine-scale spatial field. Moreover, solving the forward problem on a coarse-grid is always much faster, and we exploit this in our multistage MCMC algorithm. The upscaling procedure is a way to link the coarse- and fine-scale data. The simplest way to think about the upscaling procedure in the spatial domain is as the use of spatial block averages of the fine-scale data to obtain the coarse-scale data. The ideas of Markov random fields and linear link equations have been used to model multiscale data (Ferreira et al. 2002; Ferreira and Lee 2007). We need to modify this linear link idea so that the forward equations (and the corresponding boundary conditions) remain valid in the upscaling scheme. We upscale the spatial field Y on the coarse-grid, then solve the original system on the coarse-grid with the upscaled spatial field (Christie 1996; Durlofsky 1998). The main theme of the procedure is that, given a fine-scale spatial field Y, we can use an operator L (it can be averaging or more complicated integrations with boundary conditions) so that the coarse data y_c can be expressed as y_c = L(Y) + ε_c, where ε_c is a random error term that accounts for deviations from the deterministic upscaling procedure. As we have parameterized the spatial field Y using the K-L expansion, the final equation is given as

y_c = L(Y) + ε_c = L_c(θ, l_1, l_2, σ²) + ε_c,

where L_c can be viewed as an operator whose input is the fine-scale spatial field, or equivalently the parameters of the model θ, l_1, l_2, and σ², and whose output is the coarse-scale value at a given location.
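The simplest choice of the operator L mentioned above, spatial block averaging, can be sketched as follows. Flow-based upscaling used for the reservoir problem additionally solves local flow equations with boundary conditions; this sketch shows only the averaging variant, with a toy field in place of a permeability realization.

```python
import numpy as np

def block_average(fine, factor):
    """Upscale a 2-D fine-grid field by averaging factor x factor blocks:
    the simplest choice of L in y_c = L(Y) + eps_c."""
    n0, n1 = fine.shape
    assert n0 % factor == 0 and n1 % factor == 0
    # reshape to (coarse rows, block rows, coarse cols, block cols), then
    # average each block
    return fine.reshape(n0 // factor, factor, n1 // factor, factor).mean(axis=(1, 3))

Y_fine = np.arange(36, dtype=float).reshape(6, 6)   # toy fine-scale field
Y_coarse = block_average(Y_fine, 3)                  # 2 x 2 coarse field
```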
We assume that the error ε_c follows a multivariate normal distribution with mean 0 and covariance σ_c² I, that is, y_c | θ, l_1, l_2, σ², σ_c² ∼ MVN(L_c(θ, l_1, l_2, σ²), σ_c² I). We assume the prior distribution σ_c² ∼ InverseGamma(a_c, b_c). After integrating out σ_c², we obtain the marginal distribution

P(y_c | θ, l_1, l_2, σ²) ∝ Γ(a_c + N/2) [ b_c + ½ ‖y_c − L_c(θ, l_1, l_2, σ²)‖² ]^{−(a_c + N/2)}, (10)

where N is the number of observations of the coarse-scale permeability field. The choice of the upscaling operator L_c depends on the forward solver of the scientific problem at hand. The details of the choice of L_c for the reservoir simulation problem are provided in Section 7.1.

3.4 The Likelihood and Prior Distributions

The likelihood is derived from (2) and (7) as d = G(B(l_1, l_2, σ²)θ) + ε_f = F_f(θ, l_1, l_2, σ²) + ε_f, where F_f can be viewed as a realization from the forward simulator whose input variables are the parameters θ, l_1, l_2, and σ². This realization F_f is obtained from the forward simulator through the solution of several differential equations. We assume the error distribution ε_f ∼ MVN(0, σ_f² I), that is, d | θ, l_1, l_2, σ², σ_f² ∼ MVN(F_f(θ, l_1, l_2, σ²), σ_f² I). The prior distribution for σ_f² is assumed to be σ_f² ∼ InverseGamma(a_f, b_f). Then, after integrating out σ_f², we have the marginal likelihood

P(d | θ, l_1, l_2, σ²) ∝ Γ(a_f + n/2) [ b_f + ½ ‖d − F_f(θ, l_1, l_2, σ²)‖² ]^{−(a_f + n/2)}. (11)

We also need to assign prior distributions to the parameters of the covariance kernel. The prior distribution for θ is θ | σ_θ² ∼ MVN(0, σ_θ² I) with σ_θ² ∼ InverseGamma(a_0, b_0). Again, after integrating out σ_θ², we obtain the marginal prior distribution

P(θ) ∝ Γ(a_0 + m/2) [ b_0 + ½ θ′θ ]^{−(a_0 + m/2)}. (12)

Additionally, the prior distribution for σ² is taken to be Gamma(a_s, b_s).
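The closed form (11) follows because the inverse-gamma prior on σ_f² is conjugate to the Gaussian likelihood. A small numeric sketch verifying the shape of the marginal; the residual vectors and hyperparameters are arbitrary choices, and the normalizing constant, identical for both residual vectors, cancels in the comparison.

```python
import numpy as np

def log_marglik(resid, a_f, b_f):
    # log of (11) up to an additive constant: sigma_f^2 integrated out
    n = resid.size
    return -(a_f + n / 2.0) * np.log(b_f + 0.5 * (resid @ resid))

def log_marglik_quad(resid, a_f, b_f):
    # brute-force check: integrate s^{-(n/2 + a_f + 1)} exp(-(b_f + ||r||^2/2)/s)
    # over s = sigma_f^2 with the trapezoidal rule on a fine grid
    n = resid.size
    s = np.linspace(1e-3, 60.0, 200001)
    f = s ** (-(n / 2.0 + a_f + 1.0)) * np.exp(-(b_f + 0.5 * (resid @ resid)) / s)
    return np.log(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)))

r1 = np.array([0.5, -1.0, 0.3, 0.8, -0.2])
r2 = np.array([1.5, -0.5, 1.0, 0.2, 0.7])
```

Differences of the closed-form log marginals match the brute-force integrals, confirming that the heavy-tailed power form in (11) is the correct marginal.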
We assume uniform priors for l_1 and l_2.

3.5 The Posterior Distribution and its Continuity

From (8) we obtain the posterior distribution of the spatial field Y given the output data d, the coarse-scale data y_c, and the observed fine-scale data y_o using Bayes' theorem as

P(θ, l_1, l_2, σ² | d, y_c, y_o) ∝ P(d | θ, l_1, l_2, σ²) P(y_c | θ, l_1, l_2, σ²) P(y_o | θ, l_1, l_2, σ²) P(θ) P(l_1, l_2) P(σ²). (13)

Each part of the expression on the right-hand side is specified in Section 3. For simplicity, in what follows all the model unknowns (l_1, l_2, σ², θ_0, θ_1, ..., θ_m) are denoted by τ, F_f(θ, l_1, l_2, σ²) is denoted by F_τ, and L_c(θ, l_1, l_2, σ²) is denoted by L_τ. Using (9), (10), (11), and (12) in (13), the posterior distribution is given by

π(τ) = P(τ | d, y_c, y_o) ∝ [ b_f + ½ ‖d − F_τ‖² ]^{−(a_f + n/2)} [ b_c + ½ ‖y_c − L_τ‖² ]^{−(a_c + N/2)} [ b_k + ½ ‖y_o − y_p‖² ]^{−(a_k + N_obs/2)} [ b_0 + ½ θ′θ ]^{−(a_0 + m/2)} (σ²)^{a_s − 1} exp(−σ²/b_s), (14)

where ‖d − F_τ‖² = Σ_{i=1}^n (d_i − F_τ(i))² for the n output observations.

In our Bayesian hierarchical model, the likelihood term contains the forward model G, which is highly nonlinear and hence creates an ill-posed inverse problem: the sensitivity of the solutions to slight perturbations in the data is unacceptably high (O'Sullivan 1986). In the Bayesian framework, the solution of the inverse problem is the posterior distribution of the unknowns. As a result, to show that the Bayesian inverse problem is well-posed, we have to prove that, under some regularity conditions, small perturbations of the given data do not lead to large perturbations of the posterior distribution of the unknowns. In other words, we have to show that the posterior

distribution is continuous in a suitable probability metric with respect to changes in the data; that is, there exists a unique posterior distribution that depends continuously on the observations. Thus, in the Bayesian framework, if we can show that the posterior measure is Lipschitz continuous with respect to the data in the total variation distance, then this guarantees that the Bayesian inverse problem is well-posed (see, e.g., Cotter et al. 2009; Stuart 2010). We prove this result for our multiscale Bayesian hierarchical model. To show the continuity of the posterior with respect to the data, we define

π_z(τ) = (1/Z) g(τ, z) π_0(τ), (15)

where z is the concatenated dataset, that is, z = (d, y_c, y_o)′,

g(τ, z) = [ b_f + ½ ‖d − F_τ‖² ]^{−(a_f + n/2)} [ b_c + ½ ‖y_c − L_τ‖² ]^{−(a_c + N/2)} [ b_k + ½ ‖y_o − y_p‖² ]^{−(a_k + N_obs/2)},

π_0(τ) = [ b_0 + ½ θ′θ ]^{−(a_0 + m/2)} (σ²)^{a_s − 1} exp(−σ²/b_s), and Z = ∫ g(τ, z) π_0(τ) dτ.

Theorem 1. For every r > 0 there exists C = C(r) such that the posterior measures π_1 and π_2 for two different datasets z_1 and z_2 with max(‖z_1‖_2, ‖z_2‖_2) ≤ r satisfy

‖π_1 − π_2‖_TV = ½ ∫ | (1/Z_1) g(τ, z_1) − (1/Z_2) g(τ, z_2) | π_0(τ) dτ ≤ C ‖z_1 − z_2‖_2,

where Z_1 and Z_2 are defined by (15) for z_1 and z_2, respectively.

The proof is given in the supplementary materials. Note that it can also be shown that the above Lipschitz continuity condition is valid for the Hellinger distance, that is,

d_Hell(π_1, π_2) = ( ½ ∫ ( √((1/Z_1) g(τ, z_1)) − √((1/Z_2) g(τ, z_2)) )² π_0(τ) dτ )^{1/2} ≤ C ‖z_1 − z_2‖_2.

Furthermore, the proof can be extended to a general G under some additional conditions.

4. BAYESIAN COMPUTATION USING MCMC

As the posterior is not analytically tractable, we use an MCMC-based computational method to simulate the parameters from the posterior distribution. First, we consider the case where the number of terms retained in the K-L expansion is fixed.
We solve the eigenvalue problem for the fine-scale spatial field beforehand and select m, the number of terms retained in the K-L expansion, such that the energy ratio defined in (6) is at least 90%. For a constant m we use the standard Metropolis-Hastings MCMC algorithm to sample from the posterior.

Algorithm 1 (Metropolis-Hastings MCMC; Robert and Casella 2004). Suppose at the rth step we are at the state τ_r. Then:

Step 1. Generate τ* from q(τ* | τ_r).

Step 2. Accept τ* with probability

α(τ_r, τ*) = min{ 1, [P(d | τ*) P(y_c | τ*) P(y_o | τ*)] / [P(d | τ_r) P(y_c | τ_r) P(y_o | τ_r)] × [P(τ*) / P(τ_r)] × [q(τ_r | τ*) / q(τ* | τ_r)] },

where the three factors are the likelihood ratio, the prior ratio, and the proposal ratio, respectively.

Starting with initial parameters τ_0, the MCMC algorithm generates a Markov chain {τ_r}. The target distribution π(τ) is the stationary distribution of the chain, so the τ_r represent samples generated from π(τ) after the chain converges and reaches steady state. As an example, we can use the standard random walk Metropolis-Hastings algorithm to generate samples from the posterior distribution: at the rth step, we propose τ* = τ_r + h_τ u_τ, where u_τ is generated from a N(0, I) distribution. At each iteration, after we propose new values of θ, l_1, l_2, and σ², we have to solve the eigenvalue problem for the K-L expansion to get the fine-scale spatial realizations, which is very expensive. To speed up the computation, we compute the eigenvalue problem (K-L expansion) for a certain number of pairs (l_1, l_2) beforehand and interpolate to find the eigenvalues and eigenvectors at each step of the Metropolis-Hastings MCMC. Note that a change of σ does not change the eigenvectors; it only changes the magnitude of the eigenvalues, which can be adjusted by a scale factor. We can show that this approximation is valid if the interpolation grid of the correlation lengths is sufficiently small. Since the magnitude of σ does not affect the interpolation, without loss of generality we can assume σ² = 1.
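Algorithm 1 with the random-walk proposal can be sketched on a toy target. The bivariate standard normal below stands in for the actual posterior π(τ), and the step size h is an illustrative choice, not a value from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def rw_metropolis(logpost, tau0, h, n_iter):
    """Random-walk Metropolis-Hastings (Algorithm 1): propose
    tau* = tau_r + h * u with u ~ N(0, I); for this symmetric proposal
    the acceptance probability reduces to min{1, pi(tau*)/pi(tau_r)}."""
    tau = np.asarray(tau0, dtype=float)
    lp = logpost(tau)
    chain = np.empty((n_iter, tau.size))
    for r in range(n_iter):
        tau_prop = tau + h * rng.standard_normal(tau.size)
        lp_prop = logpost(tau_prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            tau, lp = tau_prop, lp_prop
        chain[r] = tau
    return chain

# Toy run: sample a bivariate standard normal "posterior".
chain = rw_metropolis(lambda t: -0.5 * (t @ t), np.zeros(2), 0.8, 20000)
```

In the actual model, `logpost` would evaluate the log of (14), including the forward solver inside F_τ, which is what makes each step expensive and motivates the interpolation and two-stage devices.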
Here we prove only the isotropic case, that is, l_1 = l_2 = l.

Theorem 2. Let A_l be the covariance matrix for a given correlation length l. Let λ_1, λ_2, ..., λ_m be the m ordered eigenvalues considered in the K-L expansion of A_l, and let φ_1, φ_2, ..., φ_m be the corresponding orthonormal eigenvectors. Let A_{l+δl} be the covariance matrix obtained by perturbing the correlation length l by a small quantity δl, with ordered eigenvalues λ'_1, λ'_2, ..., λ'_m and corresponding orthonormal eigenvectors φ'_1, φ'_2, ..., φ'_m. Then λ'_i = λ_i + O(δl) and φ'_i = φ_i + O(δl) for all i.

The proof is given in the supplementary materials.

5. EXTENSION TO THE MODEL WITH UNKNOWN m

In Section 4, m, the dimension of θ, remained fixed, so the number of terms retained in the K-L expansion was taken to be a constant. Usually it is estimated using (6). That method uses only the fine-scale direct data y_o but ignores the output data d and the coarse-scale data y_c, so it may not capture the actual heterogeneity of the spatial field very well. We extend our previous model by treating m as an additional model unknown and obtain its posterior distribution by conditioning on all the available data. In this situation, using a Bayesian

hierarchical model, the posterior can be written as

π(τ, m) = P(θ, l_1, l_2, m, σ² | d, y_c, y_o) ∝ P(d | θ, l_1, l_2, σ², m) P(y_c | θ, l_1, l_2, σ², m) P(y_o | θ, l_1, l_2, σ², m) P(l_1, l_2) P(σ²) P(θ | m) P(m). (16)

We keep all the model specifications the same as in Section 3 but use a truncated Poisson prior for P(m). We need to modify the MCMC computation procedure because of this unknown dimension: if we vary the number of terms in the K-L expansion, then the dimension of θ also changes at each step. This jumping between different dimensions of the parameter space can be achieved through reversible jump Markov chain Monte Carlo methods, as proposed by Green (1995). We describe the reversible jump MCMC procedure in our case following the general approach of reversible jump MCMC (Waagepetersen and Sorensen 2001). We assume the prior for m | λ is Poisson(λ) truncated at m_max, where λ ~ Gamma(ν, β). Integrating out λ, we get P(m) ∝ (1/(β + 1))^(m + ν + 1). All the other terms in (16) remain the same as in (14).

Algorithm (Reversible Jump MCMC as a Birth and Death Process). Suppose at the rth step we are at the state (m_r, τ_r). Then we have three possible moves:

Birth step: Propose to add the (m_r + 1)th term in the K-L expansion with probability p^b_{m_r}. Propose θ* from q(·), so θ' = (θ_r, θ*). The acceptance probability is

α_{m_r, m_r+1}(θ_r, θ*) = min{ 1, [π(θ', m_r + 1) p^d_{m_r+1}] / [π(θ_r, m_r) p^b_{m_r} q(θ*)] }.

Death step: Propose to delete the m_r th term with probability p^d_{m_r}, so (θ', θ*_{m_r}) = θ_r. The acceptance probability is

α_{m_r, m_r−1}(θ_r, θ') = min{ 1, [π(θ', m_r − 1) p^b_{m_r−1} q(θ*_{m_r})] / [π(θ_r, m_r) p^d_{m_r}] }.

Jump step: Propose a new θ of the same dimension along with l_1, l_2, σ², with probability p^s_{m_r}; in other words, generate τ* from q(τ* | τ_r). The acceptance probability is

α(τ_r, τ*) = min{ 1, [π(τ*) q(τ_r | τ*)] / [π(τ_r) q(τ* | τ_r)] }.

Here, p^b_{m_r} + p^d_{m_r} + p^s_{m_r} = 1 for all m_r.

6.
TWO-STAGE REVERSIBLE JUMP MCMC

The main disadvantage of the above reversible jump MCMC algorithm is the high computational cost of solving the forward model on the fine grid to compute G in the target distribution π(τ, m). Typically, in our simulations, reversible jump MCMC converges to the steady state only after many iterations, so a large amount of CPU time is spent on simulating rejected samples, making the direct (full) reversible jump MCMC simulations very expensive. The direct reversible jump MCMC can be improved by adapting the proposal distribution q(τ, m | τ_n, m_n) to the target distribution using a coarse-scale model. This can be achieved by a two-stage reversible jump MCMC method, where we first compare the output from the forward model on a coarse grid. If the proposal is accepted by the coarse-scale test, then a full fine-scale computation is conducted and the proposal is further tested as in the direct reversible jump MCMC method. Otherwise, the proposal is rejected by the coarse-scale test and a new proposal is generated from q(τ, m | τ_n, m_n). The coarse-scale test filters out the unacceptable proposals and avoids the expensive fine-scale tests for those proposals. The filtering process essentially modifies the proposal distribution q(τ, m | τ_n, m_n) by incorporating the coarse-scale information about the problem. The algorithm for a general two-stage MCMC method was introduced by Christen and Fox (2005). Our hierarchical model can also take advantage of inexpensive upscaled simulations to screen the proposals. Here we extend the algorithm to a two-stage reversible jump MCMC method. Let F*_τ be the output computed by solving the forward model on a coarse scale for the given fine-scale spatial field with parameters (τ, m). In the case of reservoir characterization (Section 2.1), this is done either with upscaling methods or with mixed MsFEM. The fine-scale target distribution π(τ, m) is approximated on the coarse scale by π*(τ, m).
All the terms in the expression of π*(τ, m) are the same as those of π(τ, m), except that the likelihood term [b_f + (1/2)||d − F_τ||²]^−(a_f + n/2) is replaced by [b_f + (1/2) H(||d − F*_τ||)²]^−(a_f + n/2), where the function H is estimated from offline computations using independent samples from the prior. More precisely, spatial fields are generated using independent samples from the prior distribution; then both the coarse-scale and fine-scale simulations are performed, and ||d − F_τ|| is plotted against ||d − F*_τ||. This scatterplot data can be modeled by ||d − F_τ|| = H(||d − F*_τ||) + w, where w is a random component representing the deviation of the true fine-scale error from the predicted error. Using the coarse-scale distribution π*(τ, m) as a filter, the two-stage reversible jump MCMC can be described as follows.

Algorithm (Two-Stage Reversible Jump MCMC as a Birth and Death Process). Suppose at the nth step we are at the state ν_n = (τ_n, m_n), and let k_n be the corresponding fine-scale permeability field.

Step 1. This step is the same as in the reversible jump MCMC method described earlier; the only difference is that the fractional flow F*_ν is computed by solving the coarse-scale model. At ν_n, generate a trial proposal ν̃ from the distribution q(ν̃ | ν_n) in the same way as in the reversible jump MCMC described earlier; that is, this step is the same as doing reversible jump MCMC on π*(ν).

Step 2. Take the proposal ν = ν̃ with probability α_p(ν_n, ν̃), and ν = ν_n with probability 1 − α_p(ν_n, ν̃). For a birth step, the acceptance probability is α_p(ν_n, ν̃) = min{ 1, [π*(τ̃, m_n + 1) p^d_{m_n+1}] / [π*(τ_n, m_n) p^b_{m_n} q(θ*)] }. For a death step, it is α_p(ν_n, ν̃) = min{ 1, [π*(τ̃, m_n − 1) p^b_{m_n−1} q(θ*_{m_n})] / [π*(τ_n, m_n) p^d_{m_n}] }. For a jump step, it is α_p(ν_n, ν̃) = min{ 1, [π*(τ̃) q(τ_n | τ̃)] / [π*(τ_n) q(τ̃ | τ_n)] }.

Step 3. Accept ν as a sample with acceptance probability α_f(ν_n, ν) = min( 1, [π(ν) π*(ν_n)] / [π(ν_n) π*(ν)] ).
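The screening logic of Steps 1-3 can be sketched for a plain fixed-dimension Metropolis chain; the two toy one-dimensional densities below stand in for the fine-scale target π and its coarse approximation π*, and all names are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-stage (delayed acceptance) Metropolis sketch: a cheap coarse target
# pi_star screens proposals; the fine-scale correction in Step 3 uses
# min(1, pi(v) pi_star(v_n) / (pi(v_n) pi_star(v))) to preserve pi exactly.
def log_pi(v):        # "fine-scale" target (expensive in the real problem)
    return -0.5 * v ** 2

def log_pi_star(v):   # "coarse-scale" approximation (cheap)
    return -0.5 * (v / 1.1) ** 2

v = 0.0
fine_evals, samples = 0, []
for _ in range(20000):
    prop = v + 0.8 * rng.standard_normal()
    # Stage 1: coarse-scale screening (symmetric random-walk proposal)
    if np.log(rng.uniform()) < log_pi_star(prop) - log_pi_star(v):
        # Stage 2: fine-scale correction, reached only by screened proposals
        fine_evals += 1
        log_alpha = (log_pi(prop) + log_pi_star(v)) - (log_pi(v) + log_pi_star(prop))
        if np.log(rng.uniform()) < log_alpha:
            v = prop
    samples.append(v)

print(fine_evals / 20000)   # fraction of proposals that needed a fine-scale solve
```

The chain still targets π, but the expensive fine-scale evaluation is performed only for the fraction of proposals that pass the coarse filter.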

To show that the two-stage reversible jump MCMC sampling generates a Markov chain whose stationary distribution is the target distribution, it is sufficient to show that the transition kernel satisfies the detailed balance condition.

Theorem 3. If K(ν_n, ν) is the transition kernel of the Markov chain {ν_n} generated by the two-stage reversible jump MCMC, then π(ν_n) K(ν_n, ν) = π(ν) K(ν, ν_n).

The proof is given in the supplementary materials.

7. SIMULATION AND REAL EXAMPLES FROM RESERVOIR MODEL

The main goal in reservoir modeling is to infer the important physical properties of the reservoir, such as permeability, porosity, fluid saturation, and oil-water and gas-oil contacts, which are the major contributors to the uncertainties in reservoir performance forecasting, using the direct and indirect observations. In the following examples, we are particularly interested in quantifying and reducing the uncertainties for one of the major subsurface properties, permeability. Specifically, our goal is to infer the fine-scale permeability spatial field using the few fine-scale permeability data obtained from well logs and cores, the coarse-scale data obtained from seismic traces, and the indirect observations from the production history, such as the water-cut or fractional flow.

7.1 The Mathematical Model and Specification of G

The model is described in Section 2.1 as d = logit[G(Y)] + ε, where d is the water-cut data, Y is the fine-scale permeability field expressed on a logarithmic scale, that is, Y = log(k_f), and G is the simulator output using the log-permeability field Y. We consider two-phase flow in a subsurface formation over a bounded set D ⊂ R² under the assumption that the fluid displacement is dominated by viscous effects. For clarity of exposition, we neglect the effects of gravity, compressibility, and capillary pressure, although our proposed approach is independent of the choice of physical mechanisms.
Furthermore, the porosity φ is considered to be a known constant. G is determined by combining Darcy's law with a statement of conservation of mass, and thus by solving the pressure and saturation equations, which form a coupled system of partial differential equations. More details about the pressure and saturation equations can be found in Efendiev et al. (2005) and Efendiev, Hou, and Luo (2006). The fractional flow, or water-cut, F depends on the total velocity v and the water saturation S, which are the solutions of the pressure and saturation equations for a given spatial permeability field k_f(x) = exp(Y(x)) with some boundary conditions on S and p. In other words, Y(x) is the input and F is the output of the forward simulator, so F can be written as F = G(Y(x)). Since F always lies between 0 and 1, we take a logit transformation of G and write the forward model as d = logit(G(Y(x))) + ε.

7.2 The Upscaling Procedure

Consider the fine-scale spatial field defined in the domain with the underlying fine grid as shown in Figure 2. On the same graph, we illustrate a coarse-scale partition of the domain. Here we consider a single-phase flow upscaling procedure for two-phase flow in heterogeneous porous media. The main idea of the calculation of coarse-scale permeability is that it delivers the same average response of the forward model as that of the underlying fine-scale problem locally in each coarse block (see Christie 1996; Durlofsky 1998; Efendiev et al. 2005; Efendiev, Hou, and Luo 2006). For each coarse domain K, we solve the local pressure equations on the fine grid with some coarse-scale boundary conditions. The approach considered here is to replace k_f with upscaled coarse permeabilities k_c, which are constant on each fine-grid cell within the same coarse block. By definition, k_c is a discrete quantity relying on the discretization of the medium; in particular, k_c depends on the location and geometry of the grid block in which it is computed. The essential requirement on k_c is that it leads to pressure and velocity solutions of the desired accuracy, so that the average response of the forward model in each coarse domain is almost the same as the response from the fine-scale model. In our numerical examples, we take the logarithm of the observed coarse-scale permeability as our coarse data, that is, y_c = log(k_c).

Figure 2. Schematic description of fine- and coarse-grids. Solid lines illustrate a coarse-scale partitioning, while dotted lines show a fine-scale partitioning within coarse-grid cells.

7.3 Numerical Results for Simulated Reservoirs

In our first example, we consider a simulated reservoir model in which the unknown fine-scale permeability field is taken to be a smooth spatial field on a grid on the unit square. We consider only the isotropic case, that is, we take l_1 = l_2 = l. We generate 5 fine-scale permeability fields from a Gaussian field with the squared exponential covariance structure with l = 0.25 and σ² = 1. The reference permeability field is taken to be the average of these 5 permeability fields. In this model, water is injected at 6 injector wells along two edges, and oil and water are produced at one producing well at the center. The fractional flow, or water-cut, data are generated by using the reference permeability field as input to the Eclipse software (Eclipse 2010) and were validated by the petroleum engineering department at Texas A&M University. The observed

coarse-scale permeability field is calculated using the upscaling procedure on a 5 × 5 coarse grid. Our goal is to infer the fine-scale permeability field using the data at the well locations, the coarse-scale data, and the water-cut data, and to see how closely the predicted field resembles the reference permeability field. The prior for l is taken to be a uniform distribution truncated to (0.1, 0.5). The prior for σ² is assumed to be a Gamma distribution with hyperparameters a_s = 3 and b_s = 2. The prior distribution for m is taken to be a Poisson distribution truncated at 30, with hyperprior λ following a Gamma distribution with hyperparameters ν = 4 and β = 4. First, we implement the reversible jump MCMC algorithm and draw 250,000 samples from the posterior. After a 30,000-iteration burn-in period, we retain every 10th sample from the posterior. The mode of the posterior distribution of m is 9. The posterior median of the fine-scale permeability field is very close to the reference permeability field. The mode of the posterior density of l is near 0.25, and the posterior density of σ² is centered at 1, which are the corresponding original parameter values of the generated reference permeability field.

Figure 3. Numerical results using two-stage reversible jump MCMC for the simulated example. (a) The true (reference) fine-scale log-permeability field. (b) Initial fine-scale log-permeability field. (c) The observed coarse-scale log-permeability field. (d) The median of the sampled fine-scale log-permeability field.

Figure 5. Numerical results for the simulated example. (a) The first quartile of the sampled posterior fine-scale log-permeability field. (b) The third quartile of the sampled posterior fine-scale log-permeability field.
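The birth/death mechanics of the reversible jump sampler described in Section 5 can be sketched on a toy target in which the θ_i are iid N(0, 1) and m has a truncated Poisson prior; with a N(0, 1) birth proposal the θ contributions cancel, and the acceptance ratio reduces to the prior ratio in m. This is our own minimal construction, not the paper's reservoir posterior:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)

# Toy birth/death reversible jump: theta_i ~ N(0,1) i.i.d., m ~ truncated
# Poisson(lam) on {1, ..., m_max}. Birth and death are each proposed with
# probability 1/2, so p^b and p^d cancel in the acceptance ratio.
lam, m_max = 4.0, 10

def log_prior_m(m):
    # unnormalized truncated Poisson: lam^m / m!
    return m * np.log(lam) - np.log(float(factorial(m)))

m, theta = 4, list(rng.standard_normal(4))
samples = []
for _ in range(20000):
    if rng.uniform() < 0.5:
        if m < m_max:                       # birth: add component m+1
            t_new = rng.standard_normal()
            # target factor for t_new cancels the N(0,1) proposal density,
            # leaving only the prior ratio in m
            if np.log(rng.uniform()) < log_prior_m(m + 1) - log_prior_m(m):
                theta.append(t_new); m += 1
    else:
        if m > 1:                           # death: drop component m
            if np.log(rng.uniform()) < log_prior_m(m - 1) - log_prior_m(m):
                theta.pop(); m -= 1
    samples.append(m)

print(np.mean(samples[5000:]))   # close to E[m] under the truncated Poisson prior
```

Because there is no likelihood in this toy target, the sampled marginal of m simply recovers the truncated Poisson prior, which makes the dimension-jump bookkeeping easy to verify.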
Then we implement the two-stage reversible jump MCMC algorithm with the same reference field and water-cut data as used in the direct reversible jump MCMC method. The two-stage reversible jump MCMC produced the same results as the direct reversible jump MCMC (see Figures 3 and 4). The two-stage algorithm is much faster, as it rejects the bad samples in the first stage, where we solve the partial differential equations on a coarse grid. The effective acceptance rate of the two-stage algorithm increases to almost 80%, whereas the regular reversible jump MCMC has an acceptance rate of nearly 10%. Although we have taken almost flat priors for the parameters of our models, the posterior densities have peaks at the corresponding original values of the generated reference permeability field. Thus, we can conclude that integrating data from different sources in the hierarchical model helped to reduce the uncertainties in the unknown fine-scale permeability field. To visualize the uncertainties in the prediction, we plot the first and third quartiles of the sampled fine-scale permeability field from the posterior in Figure 5.

Figure 4. Posterior distributions using two-stage reversible jump MCMC for the simulated example. (a) Histogram of the posterior distribution of m for two-stage reversible jump MCMC. (b) Posterior density of l. (c) Posterior density of σ².

In the next example, we consider a case where we assume that coarse-scale data are not available but 10% of the fine-scale data are available at equidistant points. We proceed with the same reference permeability field and water-cut data. We can see from Figure 6(b) that the posterior median is not close to the true permeability field. Moreover, the posterior distribution of l is centered around 0.42 with standard deviation approximately 0.01. The same procedure is replicated assuming 25% of the data available (see Figure 6(c)). In this case, the posterior distribution of l is centered around 0.25, the true value of l, with standard

deviation approximately 0.01. The posterior distribution of σ² is centered around 1 with standard deviation approximately 0.7. Thus we can conclude that, if coarse-scale data are not available, 10% fine-scale data are not enough to capture the parameters of the model; we need at least 25% of the fine-scale data to infer the model parameters.

Figure 6. (a) The true fine-scale log-permeability field. (b) The median of the sampled fine-scale log-permeability field with only 10% of the fine-scale data observed and no coarse-scale data available. (c) The median of the sampled fine-scale log-permeability field with only 25% of the fine-scale data observed and no coarse-scale data available.

7.4 Numerical Results for a Real Field Example

In this section, we apply our model to a real field example, the PUNQ-S3 model dataset. The PUNQ-S3 case is from a reservoir engineering study on a real field performed by Elf Exploration Production. It is qualified as a small-size industrial reservoir engineering model. The model contains 19 × 28 × 5 grid blocks. The PUNQ-S3 dataset was an experimental study in which the true permeability was actually known on the grid, but the researchers were asked not to use the permeability data for their modeling. They were asked to use the production history to infer the true permeability field and then to compare how well their model resembles the actual permeability field. For our example, we consider only the top of the five layers in the dataset and follow the same guidelines.

Figure 7. Results from two-stage reversible jump MCMC sampling for the PUNQ-S3 model. (a) The true fine-scale log-permeability field. (b) Initial fine-scale log-permeability field. (c) The observed coarse-scale log-permeability field. (d) The median of the sampled fine-scale log-permeability field.
We have used the production history, that is, the water-cut data, the permeability data on a 5 × 5 coarse grid, and the true fine-scale permeability data only at the well locations to infer the fine-scale permeability field.

Figure 8. Posterior distributions for the PUNQ-S3 model. (a) Histogram of the posterior distribution of m. (b) Posterior density of l. (c) Posterior density of σ².
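Posterior summaries such as the quartile maps shown for these examples can be computed directly from the retained MCMC draws. The sketch below uses a hypothetical array `fields` of posterior log-permeability draws in place of actual simulator-driven MCMC output:

```python
import numpy as np

# Sketch: pixelwise posterior summaries of a sampled spatial field.
# `fields` is a hypothetical (n_samples, n, n) array standing in for the
# fine-scale log-permeability draws produced by the MCMC run.
rng = np.random.default_rng(4)
fields = rng.standard_normal((2200, 25, 25))   # stand-in for posterior draws

fields = fields[200::10]                       # discard burn-in, thin every 10th
q1, med, q3 = np.percentile(fields, [25, 50, 75], axis=0)
print(q1.shape, med.shape, q3.shape)           # each summary is an n x n map
```

The pixelwise median gives the point estimate of the field, and the first/third quartile maps visualize the pointwise posterior uncertainty.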

Figure 9. (a) The first quartile of the sampled posterior fine-scale log-permeability field. (b) The third quartile of the sampled posterior fine-scale log-permeability field for the PUNQ-S3 model.

The permeability measurements are expressed in units of md, where 1 md = 10^-3 Darcy ≈ 10^-15 m². We use a logarithmic transformation of the permeability data and a logit transformation of the fractional-flow data in our model. The spatial locations of the field were given to the researchers in a transformed Cartesian coordinate system with square grid blocks starting from the origin; that is, the coordinate of the top-left grid block is (0, 0) and that of the bottom-right grid block is (3420, 5040). For simplification, we make another transformation of the coordinate system to a (0, 1) scale, so that in the transformed spatial domain the coordinate of the bottom-right grid block is (0.6786, 1), with all grid blocks of equal square size. The fine-scale permeability field is taken to be known at the six injector well locations along the x = 0 and x = 0.678 boundaries, that is, at the coordinates (0, 0), (0, 0.5), (0, 1), (0.678, 0), (0.678, 0.5), and (0.678, 1), and also at the producer well location at the center, that is, at the coordinate (0.339, 0.5). The other inputs, such as pore volume injected, porosity, water saturations, etc., are taken to be known. We use a squared exponential covariance structure for the prior distribution of the fine-scale log-permeability field when computing the Karhunen-Loève transform. We assume a proper prior for the correlation length, which is uniform on a truncated space. We draw 200,000 samples from the posterior distribution using the two-stage reversible jump MCMC method. After a 20,000-iteration burn-in period, we retain every 10th sample from the posterior. We can see from Figure 7 that the posterior median of the fine-scale permeability is very close to the true permeability field.
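A realization of the truncated K-L prior used here can be sketched as follows; this is a toy 30 × 30 grid with the squared exponential covariance (isotropic, l = 0.25, σ² = 1, unit square), with m chosen by the 90% energy-ratio rule:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: sampling a log-permeability field Y = log(k_f) from a truncated
# Karhunen-Loeve expansion of a squared-exponential covariance (toy grid size).
n = 30
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
pts = np.column_stack([xx.ravel(), yy.ravel()])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
C = np.exp(-0.5 * d2 / 0.25 ** 2)           # squared-exponential covariance

lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]          # sort eigenpairs in descending order
m = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.90)) + 1  # 90% energy

theta = rng.standard_normal(m)              # theta_i ~ N(0, 1)
Y = (phi[:, :m] * np.sqrt(lam[:m])) @ theta # truncated K-L realization of Y
k_f = np.exp(Y.reshape(n, n))               # fine-scale permeability field
print(m, k_f.shape)
```

Because the squared-exponential spectrum decays rapidly, only a small number of leading eigenpairs are needed to capture 90% of the field's energy, which is what makes the low-dimensional parameterization by θ practical inside the MCMC.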
The mode of the number of coefficients retained in the K-L expansion is found to be 24. The posterior mode of l is nearly 0.25 and that of σ² is nearly 9 (see Figure 8). To visualize the uncertainties in the prediction, we plot the first and third quartiles of the sampled fine-scale permeability field from the posterior in Figure 9.

Figure 10. Results for the PUNQ-S3 model with no coarse-scale data. (a) The true fine-scale log-permeability field. (b) The median of the sampled fine-scale permeability field.

Next, we consider the model assuming no coarse-scale data are available. From Figure 10 we can see that the posterior median is not very close to the true PUNQ-S3 field. Hence, we can conclude that integrating coarse-scale data into the model helps us to quantify the uncertainties in the reservoir more efficiently. The sum of squared errors between the true fine-scale permeability and the posterior median is smaller when we use the available coarse-scale data than when we use only the fine-scale data at a few well locations.

8. CONCLUSIONS

We have developed a Bayesian multiscale hierarchical model for large-scale spatial inverse problems. Data from different sources are integrated in the hierarchical model to reduce the uncertainties in the unknown spatial field. We have proven that the posterior is Lipschitz continuous in the data in the total variation norm, which ensures that the Bayesian inverse problem is well-posed. A two-stage MCMC technique is exploited for computational efficiency. We have applied our methodology to simulated datasets as well as a real dataset. Alternatively, statistical interpolation techniques such as an emulator (Kennedy and O'Hagan 2001; Higdon et al. 2004; Oakley and O'Hagan 2004) can be used in this problem. Development of a multiscale emulator (see, e.g., Craig et al.
1996, 1997; Cumming and Goldstein 2009; Vernon, Goldstein, and Bower 2010) for our inverse problem, where the forward model can be approximated by a Gaussian process regression or spline regression using a training sample of simulation runs, will be a challenging future project. In the simulated and real field applications, we have assumed that the unknown spatial field is stationary and smooth, which may not be true in many practical examples. For example, permeability fields in reservoir models may have major faults, or high-permeability channels may be embedded in a nearly impermeable background, resulting in discontinuities and a channelized structure of the spatial field. In such cases, our method can be extended to include a partition of the spatial field, followed by a piecewise Gaussian process for the input spatial field (see Kim, Mallick, and Holmes 2005). For a channelized spatial field, the uncertainties in the channel boundaries can be quantified using level-set parameterization (see Mondal et al. 2010) or other means, and then a K-L expansion of the spatial field can be used within each channel. Another important topic not considered in this article is that the forward simulator G may not represent the physical system perfectly. In that situation, the assumption of independent errors will not hold, and we need to add a model discrepancy term as in Kennedy and O'Hagan (2001), Higdon, Lee, and Holloman (2003), Goldstein and Rougier (2006), and Goldstein and Rougier (2009). We have also assumed that the combined model discrepancy and measurement errors are independent, which may not be true in general; for example, the outputs may be correlated over time, in which case we have to account for the autocorrelation in the model. Finally, in the real physical world, of course, we have to deal with the three-dimensional reservoir model. One of the very


More information

Nacional de La Pampa, Santa Rosa, La Pampa, Argentina b Instituto de Matemática Aplicada San Luis, Consejo Nacional de Investigaciones Científicas

Nacional de La Pampa, Santa Rosa, La Pampa, Argentina b Instituto de Matemática Aplicada San Luis, Consejo Nacional de Investigaciones Científicas This article was downloaded by: [Sonia Acinas] On: 28 June 2015, At: 17:05 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

Online publication date: 30 March 2011

Online publication date: 30 March 2011 This article was downloaded by: [Beijing University of Technology] On: 10 June 2011 Access details: Access Details: [subscription number 932491352] Publisher Taylor & Francis Informa Ltd Registered in

More information

Dissipation Function in Hyperbolic Thermoelasticity

Dissipation Function in Hyperbolic Thermoelasticity This article was downloaded by: [University of Illinois at Urbana-Champaign] On: 18 April 2013, At: 12:23 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954

More information

Diatom Research Publication details, including instructions for authors and subscription information:

Diatom Research Publication details, including instructions for authors and subscription information: This article was downloaded by: [Saúl Blanco] On: 26 May 2012, At: 09:38 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House,

More information

Published online: 17 May 2012.

Published online: 17 May 2012. This article was downloaded by: [Central University of Rajasthan] On: 03 December 014, At: 3: Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 107954 Registered

More information

B008 COMPARISON OF METHODS FOR DOWNSCALING OF COARSE SCALE PERMEABILITY ESTIMATES

B008 COMPARISON OF METHODS FOR DOWNSCALING OF COARSE SCALE PERMEABILITY ESTIMATES 1 B8 COMPARISON OF METHODS FOR DOWNSCALING OF COARSE SCALE PERMEABILITY ESTIMATES Alv-Arne Grimstad 1 and Trond Mannseth 1,2 1 RF-Rogaland Research 2 Now with CIPR - Centre for Integrated Petroleum Research,

More information

Computational statistics

Computational statistics Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated

More information

To cite this article: Edward E. Roskam & Jules Ellis (1992) Reaction to Other Commentaries, Multivariate Behavioral Research, 27:2,

To cite this article: Edward E. Roskam & Jules Ellis (1992) Reaction to Other Commentaries, Multivariate Behavioral Research, 27:2, This article was downloaded by: [Memorial University of Newfoundland] On: 29 January 2015, At: 12:02 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered

More information

Geometric View of Measurement Errors.
Characterizations of Student's t-distribution via regressions of order statistics. George P. Yanev and M. Ahsanullah.
Precise Large Deviations for Sums of Negatively Dependent Random Variables with Common Long-Tailed Distributions.
A008 The Probability Perturbation Method: An Alternative to a Traditional Bayesian Approach for Solving Inverse Problems. Jef Caers, Stanford University.
Spatial smoothing using Gaussian processes. Chris Paciorek.
STA 4273H: Statistical Machine Learning. Russ Salakhutdinov.
Directional Metropolis-Hastings updates for posteriors with nonlinear likelihoods. Håkon Tjelmeland and Jo Eidsvik, NTNU.
Markov Chain Monte Carlo methods. Tomas McKelvey and Lennart Svensson, Chalmers University of Technology.
The Homogeneous Markov System (HMS) as an Elastic Medium: The Three-Dimensional Case.
Derivation of SPDEs for Correlated Random Walk Transport Models in One and Two Dimensions.
Bayesian Methods for Machine Learning. CS 584: Big Data Analytics, adapted from Radford Neal.
Markov Chain Monte Carlo for Machine Learning. Advanced Topics in Machine Learning, California Institute of Technology.
Statistical Rock Physics. Min Sun.
Bayesian Inference for DSGE Models. Lawrence J. Christiano.
CCSM: Cross correlogram spectral matching. F. Van Der Meer and W. Bakker.
A short introduction to INLA and R-INLA. Thomas Opitz, BioSP, INRA Avignon.
New Insights into History Matching via Sequential Monte Carlo. Chris Drovandi.
Bayesian Estimation of DSGE Models, Chapter 3: A Crash Course in Bayesian Inference.
Gaussian processes for spatial modelling in environmental health: parameterizing for flexibility vs. computational efficiency. Chris Paciorek.
ABC methods for phase-type distributions with applications in insurance risk problems. Concepcion Ausin, Universidad Carlos III de Madrid.

Large Scale Modeling by Bayesian Updating Techniques. Weishan Ren, University of Alberta.
Accelerating Markov Chain Monte Carlo with Active Subspaces. Paul G. Constantine, Carson Kent, et al., SIAM J. Sci. Comput. (2016).
Multiple Scenario Inversion of Reflection Seismic Prestack Data. Thomas Mejer Hansen, Knud Skou Cordua, and Klaus Mosegaard (2013).
Deblurring Jupiter (sampling in GLIP faster than regularized inversion). Colin Fox, Richard A. Norton, and J. Andrés Christen.
Lectures 7 and 8: Markov Chain Monte Carlo. 4F13: Machine Learning, Zoubin Ghahramani and Carl Edward Rasmussen, University of Cambridge.
The Fourier transform of the unit step function. B. L. Burrows and D. J. Colwell.
An introduction to Bayesian statistics and model calibration. Derek Bingham, Simon Fraser University.
Dynamic System Identification using HDMR-Bayesian Technique. Shereena O A and B N Rao, IIT Madras.
Hidden Markov Chain Models. ECO 513, C. Sims.
Adaptive Posterior Approximation within MCMC. Tiangang Cui, Colin Fox, Mike O'Sullivan, Youssef Marzouk, and Karen Willcox.
Variational Principal Components. Christopher M. Bishop, Microsoft Research.
Communications in Algebra.
Review. DS GA 1002: Statistical and Mathematical Models, Carlos Fernandez-Granda.
Variational Methods in Bayesian Deconvolution. K. Zarb Adami, University of Cambridge.
Metropolis-Hastings Algorithm.
Statistical Data Analysis, Stat 3: p-values, parameter estimation. Glen Cowan, Royal Holloway.
Markov Chain Monte Carlo (MCMC).

Computational Challenges in Reservoir Modeling. Sanjay Srinivasan, The Pennsylvania State University.
Potential for large outbreaks of Ebola virus disease. Camacho, A., Kucharski, A. J., Funk, S., Breman, J., Piot, P., and Edmunds, W. J. (2014), Epidemics, 9, 70-78. DOI: 10.1016/j.epidem.2014.09.003.
Introduction to Machine Learning (CMU-10701): Markov Chain Monte Carlo Methods. Barnabás Póczos and Aarti Singh.
Bayesian lithology/fluid inversion: comparison of two algorithms. Marit Ulvmoen and Hugo Hammer, Comput Geosci, 14:357-367.
Bayesian model selection for computer model validation via mixture model estimation. Kaniav Kamary, with É. Parent, P. Barbillon, M. Keller, and N. Bousquet.
A Generalized Convection-Diffusion Model for Subgrid Transport in Porous Media. Y. Efendiev et al., Multiscale Model. Simul., 1(3):504-526 (2003).
Probabilistic Graphical Models (10-708), Homework 3.
Markov Chain Monte Carlo. Department of Statistics, The University of Auckland.
Hastings-within-Gibbs Algorithm: Introduction and Application on Hierarchical Model. Liang Jing, University of Texas at San Antonio.
Development of Stochastic Artificial Neural Networks for Hydrological Prediction. G. B. Kingston, M. F. Lambert, and H. R. Maier.
Supplementary Note on Bayesian analysis. Francisco J. Valero-Cuevas et al.
Principles of Bayesian Inference. Sudipto Banerjee, University of Minnesota.
Measurement Uncertainty and Summarising Monte Carlo Samples. A. B. Forbes, National Physical Laboratory.
Markov Chain Monte Carlo methods. Oleg Makhnin.
Bayesian model selection: methodology, computation and applications. David Nott, National University of Singapore.
Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices. Vahid Dehdari and Clayton V. Deutsch.
17: Markov Chain Monte Carlo. 10-708: Probabilistic Graphical Models, Eric P. Xing.
STA414/2104: Statistical Methods for Machine Learning II. Murat A. Erdogdu and David Duvenaud.
ECE276A: Sensing & Estimation in Robotics, Lecture 10: Gaussian Mixture and Particle Filtering. Nikolay Atanasov.