Generalized Randomized Maximum Likelihood


1 Generalized Randomized Maximum Likelihood
Andreas S. Stordal & Geir Nævdal, ISAPP, Delft, 2015

2-9 Why Bayesian history matching doesn't work, cont'd

Previous talk:
- Prior model not well identified (missing geology and structural uncertainty)
- The model is wrong: the assumption of a perfect model is not justified in real applications
- Measurement uncertainty is unknown

This talk:
- We can't even find the Bayesian solution for large models
- In a controlled environment we can make small improvements
- Also, for the Norne field, the consensus is that an ensemble-based Bayesian approach has given the best history-matched model so far

10-15 Randomized Maximum Likelihood

Generalization to nonlinear models of Gaussian posterior sampling via minimization [Kitanidis, 1995; Oliver et al., 1996]. Gaussian prior $\Phi(x \mid \mu, P)$, Gaussian likelihood $\Phi(y \mid H(x), R)$. A sample is obtained by:
- Sample $\epsilon \sim \Phi(0, P)$
- Sample $\eta \sim \Phi(0, R)$
- Minimize $O_{\eta,\epsilon}(x) = \|x - (\mu + \epsilon)\|^2_P + \|y + \eta - H(x)\|^2_R$

This is correct Bayesian sampling if $H$ is linear; otherwise it is not. Minimization uses Levenberg-Marquardt.
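A minimal numerical sketch of this recipe, assuming a toy 2-D problem with a hypothetical nonlinear forward model H; scipy's BFGS stands in here for the Levenberg-Marquardt minimization used in the talk.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

mu = np.array([1.0, 2.0])                      # prior mean
P = np.array([[1.0, 0.3], [0.3, 1.0]])         # prior covariance
R = 0.1 * np.eye(2)                            # measurement-error covariance
P_inv, R_inv = np.linalg.inv(P), np.linalg.inv(R)

def H(x):                                      # hypothetical nonlinear model
    return np.array([x[0] ** 2, np.sin(x[1])])

y = H(np.array([1.2, 1.8])) + rng.multivariate_normal(np.zeros(2), R)

def rml_sample():
    eps = rng.multivariate_normal(np.zeros(2), P)   # perturb the prior mean
    eta = rng.multivariate_normal(np.zeros(2), R)   # perturb the data
    def objective(x):
        dm = x - (mu + eps)                    # model mismatch
        dd = y + eta - H(x)                    # data mismatch
        return dm @ P_inv @ dm + dd @ R_inv @ dd
    return minimize(objective, mu, method="BFGS").x

samples = np.array([rml_sample() for _ in range(100)])
print(samples.mean(axis=0))                    # approximate posterior mean
```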

16 Motivation

Figure: RML sampling. Panels: (a) RML, (b) Generalized RML.

17-18 Same model, different behavior

- General statements that "RML works well for this and this model" are problematic.
- Same model, same realization of the reference; only the realization of the measurement noise is changed.

Figure: RML sampling. Panels: (a) RML, (b) RML.

19-22 Levenberg-Marquardt Iterations

At iteration $k+1$, update
$$X_{k+1} = X_k + \left[(1+\lambda_{k+1})P^{-1} + G_k^T R^{-1} G_k\right]^{-1} P^{-1}(\mu + \epsilon - X_k) + P G_k^T \left[(1+\lambda_{k+1})R + G_k P G_k^T\right]^{-1}(y + \eta - H(X_k))$$

- One model-mismatch part and one data-mismatch part.
- $G_k$ is the sensitivity matrix of $H(X_k)$ (hard to obtain without adjoint code).
- $\lambda_{k+1}$ is a regularization parameter (step size).
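A sketch of one such step in code, assuming the sensitivity G (from adjoint code or finite differences, which the talk notes is the hard part), P, R, and the damping lambda are given; the helper name is mine, not from the slides.

```python
import numpy as np

def lm_rml_step(x, mu, eps, y, eta, Hx, G, P, R, lam):
    """One Levenberg-Marquardt RML update: model-mismatch part + data-mismatch part."""
    # [(1+lam) P^{-1} + G^T R^{-1} G]^{-1} P^{-1} (mu + eps - x)
    A = (1 + lam) * np.linalg.inv(P) + G.T @ np.linalg.inv(R) @ G
    model_part = np.linalg.solve(A, np.linalg.solve(P, mu + eps - x))
    # P G^T [(1+lam) R + G P G^T]^{-1} (y + eta - H(x)), with Hx = H(x) precomputed
    B = (1 + lam) * R + G @ P @ G.T
    data_part = P @ G.T @ np.linalg.solve(B, y + eta - Hx)
    return x + model_part + data_part
```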

23-28 Ensemble approximation

Replace the sensitivity $G_k$ with an ensemble average [Chen and Oliver, 2013]. Non-standard form, replacing $P$ with $P_k = \mathrm{cov}(\{X_k^i\}_{i=1}^N)$ in the Hessian:
$$X_{k+1} = X_k + \left[(1+\lambda_{k+1})P_k^{-1} + G_k^T R^{-1} G_k\right]^{-1} P^{-1}(\mu + \epsilon - X_k) + P_k G_k^T \left[(1+\lambda_{k+1})R + G_k P_k G_k^T\right]^{-1}(y + \eta - H(X_k))$$
$$G_k = \left[H(X_k^1) - \bar H_k \,\cdots\, H(X_k^N) - \bar H_k\right]\left[X_k^1 - \bar X_k \,\cdots\, X_k^N - \bar X_k\right]^{+}$$

- For the ensemble version, $G_k P_k G_k^T$ and $\mathrm{cov}(\{H(X_k^i)\}_{i=1}^N)$ estimate the same quantity, but the latter needs no pseudo-inverse (this will be used later).
- For standard RML, $G_k P_k G_k^T$ is only a linearization of $\mathrm{cov}(\{H(X_k^i)\}_{i=1}^N)$.
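A sketch of the ensemble-average sensitivity and of the identity just noted, on a hypothetical small ensemble; the dimensions and forward model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 50                                   # state dimension, ensemble size

def H(x):                                      # hypothetical forward model
    return np.array([x[0] * x[1], x[2] ** 2])

X = rng.normal(size=(n, N))                    # ensemble of states (columns)
Y = np.column_stack([H(X[:, i]) for i in range(N)])

dX = X - X.mean(axis=1, keepdims=True)         # state anomalies
dY = Y - Y.mean(axis=1, keepdims=True)         # predicted-data anomalies

G = dY @ np.linalg.pinv(dX)                    # ensemble-average sensitivity G_k
P = dX @ dX.T / (N - 1)                        # ensemble covariance P_k

print(G @ P @ G.T)                             # linearized estimate G_k P_k G_k^T
print(dY @ dY.T / (N - 1))                     # direct estimate cov(H(X^i))
```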

29-30 GRML part one - weighting the prior

$$O^\gamma_{\eta,\epsilon}(x) = \gamma\,\|x - (\mu + \epsilon)\|^2_P + \|y + \eta - H(x)\|^2_R$$

(L-M) GRML update equation:
$$X_{k+1} = X_k + \left[(1+\lambda_{k+1})P^{-1} + \gamma^{-1} G_k^T R^{-1} G_k\right]^{-1} P^{-1}(\mu + \epsilon - X_k) + P G_k^T \left[(1+\lambda_{k+1})\gamma R + G_k P G_k^T\right]^{-1}(y + \eta - H(X_k))$$

31-32 Non-standard ensemble versions

GEnRML:
$$X_{k+1} = X_k + \left[(1+\lambda_{k+1})P_k^{-1} + \gamma^{-1} G_k^T R^{-1} G_k\right]^{-1} P^{-1}(\mu + \epsilon - X_k) + P_k G_k^T \left[(1+\lambda_{k+1})\gamma R + G_k P_k G_k^T\right]^{-1}(y + \eta - H(X_k))$$

GEnRML-approx:
$$X_{k+1} = X_k + P_k G_k^T \left[(1+\lambda_{k+1})\gamma R + G_k P_k G_k^T\right]^{-1}(y + \eta - H(X_k))$$
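The GEnRML-approx step is simple enough to write down directly; a sketch, assuming G and P_k are the ensemble estimates from the previous slide and the function name is mine.

```python
import numpy as np

def genrml_approx_step(x, y, eta, Hx, G, P_k, R, lam, gamma):
    """X_{k+1} = X_k + P_k G^T [(1+lam) gamma R + G P_k G^T]^{-1} (y + eta - H(X_k))."""
    B = (1 + lam) * gamma * R + G @ P_k @ G.T
    return x + P_k @ G.T @ np.linalg.solve(B, y + eta - Hx)
```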

33-36 Difficult γ

For the correct posterior $\pi$ (necessary condition):
$$E_\pi\!\left[\tfrac{d}{dx}O(X)\right] = 0, \qquad O(x) = \|x - \mu\|^2_P + \|y - H(x)\|^2_R.$$

For RML:
$$E_{\mathrm{RML}}\!\left[\tfrac{d}{dx}O(X)\right] = 2\,E\!\left[G_X^T R^{-1}\eta\right] = 0 \iff H(x) = Hx.$$

For GRML:
$$E_{\mathrm{GRML}}\!\left[\tfrac{d}{dx}O(X)\right] = 2\,E\!\left[G_X^T R^{-1}\eta\right] + 2(1-\gamma)\,E\!\left[P^{-1}(X - \mu)\right].$$

In theory: minimize the absolute value with respect to $\gamma$.
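A hedged sketch of that selection idea: estimate the stationarity gradient by Monte Carlo over GRML samples for each candidate γ and keep the γ whose average gradient is smallest. Here grml_sample (a γ-weighted sampler like rml_sample above) and G_at (the sensitivity of H at a point) are hypothetical callables, not names from the talk.

```python
import numpy as np

def grad_O(x, mu, P_inv, y, H, G_at, R_inv):
    # d/dx [ (x-mu)^T P^{-1} (x-mu) + (y-H(x))^T R^{-1} (y-H(x)) ]
    return 2 * P_inv @ (x - mu) - 2 * G_at(x).T @ R_inv @ (y - H(x))

def select_gamma(gammas, grml_sample, mu, P_inv, y, H, G_at, R_inv, n=200):
    scores = []
    for g in gammas:
        xs = [grml_sample(g) for _ in range(n)]            # samples for this gamma
        grads = [grad_O(x, mu, P_inv, y, H, G_at, R_inv) for x in xs]
        scores.append(np.linalg.norm(np.mean(grads, axis=0)))
    return gammas[int(np.argmin(scores))]                  # gamma closest to E[dO/dX] = 0
```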

37 Figure panels: (a) RML, (b) γ = 0.2, (c) RML, (d) γ = 3, (e) RML, (f) γ = 1.9.

38-43 Practical Problems (Chen and Oliver [2013])

- "Unfortunately, the method (EnRML) generally stopped reducing the data mismatch at a level that was still unacceptably high." (Pseudo-inverse)
  Measure: drop the model-mismatch term in the gradient (EnRML-approx).
- "In general, the total objective function is difficult to compute, so in most practical applications stopping is based on the magnitude of the data mismatch."
  Measure: drop the model-mismatch term in the objective function.
- Claim: the first measure is good for ensemble versions; the second is potentially bad.
- The second measure leads to Bayesian algorithms that depend on the stopping criterion and are thus formulated only in terms of the method used for minimization; see the sketch after this list.
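A minimal sketch of what stopping on the data mismatch alone looks like, assuming some update step such as the ones above; the tolerance and names are illustrative only.

```python
import numpy as np

def iterate(x0, step, H, y, R_inv, tol=1.0, max_iter=50):
    """Iterate an update and stop on the data mismatch only (the practical
    choice quoted above); ignoring the prior term in the objective is exactly
    what the talk flags as a route to overfitting."""
    x = x0
    for _ in range(max_iter):
        x = step(x)
        dd = y - H(x)
        if dd @ R_inv @ dd < tol * len(y):     # data-mismatch-only criterion
            break
    return x
```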

44-49 GEnRML part two (pGEnRML)

If $H$ is linear and invertible:
$$\|x - (\mu + \epsilon)\|^2_P = \|Hx - H(\mu + \epsilon)\|^2_{HPH^T}$$

For a nonlinear model, use (a number of) measurements for the prior term and define
$$O^\gamma_{\eta,\epsilon}(x) = \gamma\,\|Hx - H(\mu + \epsilon)\|^2_{HPH^T} + \|y + \eta - H(x)\|^2_R,$$
with the notation $HPH^T = \mathrm{cov}(H(X))$.

- $HPH^T$ has to be estimated; ideally choose the number of measurements $< N$.
- The prior term is easy to evaluate and much better estimated.
- Ideally, other functionals should be used for non-Gaussian models.
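A sketch of this measurement-space objective, assuming HPH^T has already been estimated from the ensemble (e.g. as dY dY^T / (N-1) above, restricted to the chosen subset of measurements); the names are mine.

```python
import numpy as np

def pgenrml_objective(x, mu_eps, y_eta, H, HPH_inv, R_inv, gamma):
    """gamma * ||H(x) - H(mu+eps)||^2_{HPH^T} + ||y + eta - H(x)||^2_R."""
    prior = H(x) - H(mu_eps)        # prior mismatch evaluated in data space
    data = y_eta - H(x)             # usual data mismatch
    return gamma * prior @ HPH_inv @ prior + data @ R_inv @ data
```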

50-53 Ensemble updates pGEnRML-approx

Hessian approximation:
$$\left(G_k^T\left[(1+\lambda_k)(H P_k H^T)^{-1} + \gamma^{-1} R^{-1}\right] G_k\right)^{-1}.$$

However, in the ensemble framework $H P_k H^T = G_k P_k G_k^T$, so
$$\left(G_k^T\left[(1+\lambda_k)(H P_k H^T)^{-1} + \gamma^{-1} R^{-1}\right] G_k\right)^{-1} = \left((1+\lambda_k)P_k^{-1} + \gamma^{-1} G_k^T R^{-1} G_k\right)^{-1}$$

The approximate update (pGEnRML-approx) is
$$X_{k+1} = X_k + P_k G_k^T\left[(1+\lambda_{k+1})\,G_k P_k G_k^T + \gamma R\right]^{-1}(y + \eta - H(X_k)).$$

Only the stopping criterion and $\gamma$ differ from EnRML-approx!
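For comparison with the GEnRML-approx sketch above: here the damping scales the ensemble term and γ scales R. A sketch with the same assumed inputs.

```python
import numpy as np

def pgenrml_approx_step(x, y, eta, Hx, G, P_k, R, lam, gamma):
    """X_{k+1} = X_k + P_k G^T [(1+lam) G P_k G^T + gamma R]^{-1} (y + eta - H(X_k))."""
    B = (1 + lam) * (G @ P_k @ G.T) + gamma * R
    return x + P_k @ G.T @ np.linalg.solve(B, y + eta - Hx)
```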

54 Same model as in Chen and Oliver [2013]

Figure: Estimates with R = 1. Panels: (a) RML, (b) RML-approx, (c) p γ = 1, (d) p γ = 3.5.

55 Figure: Estimates with R = 16. Panels: (a) RML, (b) RML-approx, (c) p γ = 1, (d) p γ = 4.

56-62 MCMC comparison

- MCMC on a 2D model grid with unknown permeability values. Gaussian prior.
- The model and the simulator (SimSim) used to solve the flow equations were developed at TU Delft [Jansen, 2011] (slightly compressible two-phase flow).
- 40 measurements (BHP in the injector, oil rates in 4 producers, 8 time steps).
- Two independent MCMC chains of runs.
- After burn-in and de-correlation, 1960 (independent) samples are chosen.
- Different EnRML versions are run with 100 ensemble members.
- Note: EnRML and GEnRML use the full covariance matrix, while pGEnRML-approx uses an estimated one.
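A hedged sketch of the burn-in and de-correlation step described above; the burn-in length and thinning lag are placeholders, not values from the talk.

```python
import numpy as np

def burnin_and_thin(chain, burn_in, lag):
    """chain: (n_steps, n_params) array from one MCMC run; keep every
    lag-th state after burn_in so the retained samples are roughly independent."""
    return chain[burn_in::lag]

# e.g. combining two independent chains into the reference sample set:
# samples = np.vstack([burnin_and_thin(c, burn_in=10_000, lag=100) for c in chains])
```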

63 Highlighted results - Distance to MCMC mean

Method         | grad    | obj func | γ | mean | #iter
EnRML-approx   | D       | D        |   |      |
GEnRML         | D and M | D and M  |   |      |
EnRML          | D and M | D and M  |   |      |
GEnRML-approx  | D       | D and M  |   |      |
pGEnRML-approx | D       | D and DM |   |      |

Table: Summary statistics from different EnRML approaches (D = data mismatch, M = model mismatch).

64 Results with γ = 1

Method         | mean | #iter
EnRML-approx   |      |
pGEnRML-approx |      |

Table: Summary statistics from different EnRML approaches.

65-71 Conclusions

- RML (EnRML) is a very promising tool for Bayesian history matching.
- The performance of RML (and EnRML) can be improved by weighting the prior term of the objective function.
- For EnRML, the model-mismatch term in the gradient typically leads to premature convergence and should be dropped.
- If this term is dropped from the objective function, the result is typically overfitting.
- If the model-mismatch term is problematic, a prior term can be evaluated in (part of) the measurement space.
- The gradient without model mismatch is equal to that of EnRML-approx.
- Results are better than EnRML, much better than EnRML-approx, and as good as modified EnRML for a 2D model.

72 Acknowledgements

The first author acknowledges the Uni Research CIPR/IRIS cooperative research project "4D Seismic History Matching", which is funded by its industry partners Total, Eni, and Petrobras, and the Research Council of Norway (PETROMAKS2). The second author acknowledges The National IOR Centre of Norway and its participants: the Research Council of Norway, ConocoPhillips, BP, Det Norske Oljeselskap, Eni, Maersk, DONG Energy, Statoil, GDF Suez, Lundin, Halliburton, Schlumberger and Wintershall.

73 References

Y. Chen and D. S. Oliver. Levenberg-Marquardt forms of the iterative ensemble smoother for efficient history matching and uncertainty quantification. Computational Geosciences, pages 1-15, 2013.
H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems, volume 375. Springer Science & Business Media, 1996.
J. D. Jansen. SimSim: A Simple Reservoir Simulator. TU Delft, Department of Geotechnology, 2011.
P. K. Kitanidis. Quasi-linear geostatistical theory for inversing. Water Resources Research, 31(10):2411-2419, 1995.
D. S. Oliver, N. He, and A. C. Reynolds. Conditioning permeability fields to pressure data. In European Conference for the Mathematics of Oil Recovery, V, pages 1-11, 1996.
