Journal of Computational Physics


An iterative stochastic ensemble method for parameter estimation of subsurface flow models

Ahmed H. Elsheikh a,b,c,*, Mary F. Wheeler a, Ibrahim Hoteit b,c

a Center for Subsurface Modeling (CSM), Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX, USA
b Dept. of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
c Dept. of Applied Mathematics and Computational Sciences, King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia

Journal of Computational Physics 242 (2013). Received 28 September 2012; received in revised form 25 January 2013; accepted 28 January 2013; available online 7 February 2013.

Keywords: Parameter estimation; Subsurface flow models; Iterative stochastic ensemble method; Regularization

Abstract. Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage based covariance estimators within ISEM. The proposed method is successfully applied to several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and by the error convergence rates. © 2013 Elsevier Inc. All rights reserved.

1. Introduction

Inference of subsurface geological properties is essential for many fields. Accurate prediction of groundwater flow and the fate of subsurface contaminants is one example [1,2]. The multiphase flow of hydrocarbons in oil reservoirs is another example where accurate predictions have a large economic impact [3,4]. Subsurface domains are generally heterogeneous and show a wide range of heterogeneities in many physical attributes such as the permeability and porosity fields. In order to build a high-fidelity subsurface flow model, a large number of parameters have to be specified. These parameters are obtained through a parameter estimation step. However, the amount of available data to constrain the inverse problem is usually limited in both quantity and quality. This results in an ill-posed inverse problem that might admit many different solutions. Different parameter estimation techniques can be applied to tackle this inverse problem. These techniques can be classified into Bayesian methods based on Markov Chain Monte Carlo (MCMC) [5–7], gradient based optimization methods [1,2] and ensemble Kalman filtering methods [8–10]. In the current paper an iterative stochastic ensemble method (ISEM) is proposed. ISEM can be considered as an iterative Wiener filter [11], as the problem is assumed non-stationary in the unknown parameters.
Also, it can be considered as a quasi-Newton type algorithm with a random stencil. A strong connection between the proposed algorithm and the ensemble

* Corresponding author. Address: Center for Subsurface Modeling, Institute for Computational Engineering and Sciences, The University of Texas at Austin, 201 East 24th Street, Campus Mail C0200, Austin, TX 78712, USA. E-mail address: aelsheikh@ices.utexas.edu (A.H. Elsheikh).

© 2013 Elsevier Inc. All rights reserved.

Kalman filter can be made. The ensemble Kalman filter (EnKF) is a parallel sequential Monte Carlo (SMC) method for data assimilation. EnKF was introduced by Evensen [12] and has since been used for subsurface model updating [8–10]. Both model parameters (e.g. permeability and porosity) and state variables (e.g. phase saturation and pressure values) can be updated by EnKF. In EnKF, a number of simulations are run in parallel and the unknowns (states and/or parameters) are sequentially updated based on the average response from the different runs and the measured data. Standard implementations of EnKF incorporate time dependent data in an online fashion during the flow simulation as observations become available.

Recently, iterative ensemble based methods have attracted a large amount of research effort. Gu and Oliver [13] introduced the ensemble randomized maximum likelihood (EnRML) method, which is based on a nonlinear least squares formulation. In EnRML, sensitivities are calculated using an ensemble based method. However, for high-dimensional problems the ensemble approximation of the sensitivity matrix is often poor [14]. Li and Reynolds [15] presented two iterative EnKF algorithms that relied on adjoint solutions. In the first algorithm, sensitivities were estimated using an adjoint solution from the current data assimilation time back to time zero. In the second algorithm, the sensitivities were estimated using an adjoint solution over a single time step instead of all the way back to time zero. The presented results were superior in comparison to standard EnKF. Krymskaya et al. [16] proposed a straightforward iterative EnKF for both state and parameter estimation. In this method, the mean of the parameters estimated at the end of an EnKF run was used to initialize the ensemble for the next iteration. Interestingly, when re-running the filter, the mean estimator of the initial guess was updated but the background error covariance was not changed during the iteration. Lorentzen and Nævdal [17] presented an iterative EnKF method where an early stopping criterion was introduced. A likelihood function was evaluated for each ensemble member and members were updated only if the current likelihood was higher than or equal to the likelihood value at the previous iteration. The main iteration of the algorithm is terminated if none of the members is updated. This stopping criterion provided a balance between honoring the prior information and overfitting the observations. Sakov et al. [18] presented an iterative EnKF algorithm for strongly nonlinear problems. This iterative EnKF algorithm has many similarities with EnRML [13]; however, an ensemble square root filter was used within the iterative algorithm. Similar to EnRML, the update equation included a term that penalizes the iterative procedure by including the prior information. As noted by the authors [18], the evaluation of the background error covariance is technically difficult in the iterative process and a standard formula that relies on the linearity of the solution was used. Chen and Oliver [14] presented an ensemble Kalman smoother based on EnRML where all the data are assimilated at once. The estimation of the sensitivity matrix based on sampling required large ensembles to provide reliable estimates. Emerick and Reynolds [19] proposed using multiple data assimilation cycles in an iterative method; however, the measurement error covariance was heuristically inflated by the number of data assimilations.
A mathematical justification for the linear case with a Gaussian prior was presented. For the nonlinear case, no mathematical justification was presented, but it was argued that performing successive smaller updates instead of one single large update results in better parameter estimates. The idea of inflating the measurement covariance matrix and applying multiple data assimilation cycles was also proposed by Bergemann and Reich [20].

In this paper, a new iterative stochastic ensemble method (ISEM) for nonlinear parameter estimation is presented. The theoretical derivation of ISEM is different from that of Kalman based methods. The algorithm relies on estimating gradients stochastically using an ensemble of directional derivatives. These directional derivatives are used in a Gauss–Newton iteration after applying regularization similar to the Levenberg–Marquardt method [21,22]. The resulting update equation resembles the update step in EnKF; however, the measurement error covariance matrix does not appear in the formula to regularize the inversion of the output covariance matrix. Instead, the inverse of the output covariance matrix is regularized by standard techniques to filter out spurious correlations and to solve the rank deficiency problem. A number of regularization techniques are evaluated. Truncated singular value decomposition (TSVD) [23] and Tikhonov regularization are applied. In addition, we evaluate a set of shrinkage based covariance estimators. These estimators are easy to compute and show some desirable properties. The resulting algorithm, combining ISEM and standard regularization techniques, offers a flexible and efficient alternative to gradient based optimization methods.

The proposed iterative stochastic ensemble method (ISEM) is evaluated on a set of inverse problems modeling multiphase flow in subsurface porous media. The unknown fields (hydraulic conductivity in the numerical test cases) are parameterized using the Karhunen–Loève dimension reduction technique [25–27]. The parameter estimation problem is then solved by ISEM. The efficiency of the proposed algorithm is evident in the size of the ensembles used in the presented numerical testing, which is an order of magnitude lower than any previously published results. These small ensembles enable extensive search space exploration for further uncertainty quantification studies. The main contributions of the current work are:

- An iterative stochastic ensemble method (ISEM) for nonlinear parameter estimation problems is formulated based on an ensemble of directional derivatives and a Gauss–Newton update iteration. The proposed algorithm does not rely on the Gaussianity of the estimated parameters and can be considered as a generalization of the update step in the randomized EnKF.
- The proposed algorithm relies on the inverse of the output covariance matrix. For that step, different covariance regularization methods are investigated. Truncated singular value decomposition and Tikhonov regularization are applied with adaptive estimation of the regularization parameter.
- Covariance estimation using the Ledoit–Wolf (LW) estimator, the Rao–Blackwell Ledoit–Wolf (RBLW) estimator and the oracle approximating shrinkage (OAS) estimator [24] is applied for estimating the covariance matrices. To the best of our knowledge, this is the first reported application of the OAS covariance estimator to nonlinear parameter estimation problems.

- A convergence study is performed to compare the different regularization methods on several subsurface flow models. The results from this study can be extended to different ensemble methods where convergence studies might be hard to perform.

The organization of this paper is as follows. In Section 2, the derivation of the stochastic ensemble method is presented. Section 3 provides a simple description of the regularization techniques applied to the estimated covariance matrices; a pseudo code of the algorithm is also presented. Section 4 includes a brief formulation of the subsurface flow problem followed by a brief description of a dimension reduction technique based on the Karhunen–Loève expansion. In Section 5, we apply the proposed ISEM to several nonlinear parameter estimation problems simulating multiphase flow in a subsurface model. The conclusions of the current work are drawn in Section 6.

2. Method derivation

The proposed parameter estimation algorithm relies on the Gauss–Newton method, the definition of directional derivatives, and stochastic estimation of the derivatives using an ensemble based method. Treating the numerical simulator as a multi-input multi-output nonlinear function, the simulator output for a given set of input parameters x is defined as y = H(x), where H represents the simulator nonlinear mapping function. Given a set of observations y_{obs}, one is interested in finding a set of parameters x_{est} that minimizes the squared error function

O(x) = \frac{1}{2} (y_{obs} - H(x))^T R^{-1} (y_{obs} - H(x)),    (1)

where O is the objective function and R is the output error covariance matrix. The least squares nature of the objective function enables the definition of a mismatch function F(x) = R^{-1/2} (y_{obs} - H(x)), where F(x) is of size n_o x 1 and n_o is the number of observations. With this formalization, one is interested in solving the following optimization problem

arg min_x \frac{1}{2} ||F(x)||^2.    (2)

The gradient vector is G(x) = \nabla \frac{1}{2} ||F(x)||^2 = \nabla F(x)^T F(x), where the Jacobian \nabla F(x) has the entries [\nabla F(x)]_{ij} = \partial F_i(x) / \partial x_j. The general strategy when solving nonlinear optimization problems is to solve a sequence of approximations to the original problem [28]. At each iteration, a correction \Delta x to the vector x is estimated. For nonlinear least squares, an approximation can be constructed by using the linearization F(x + \Delta x) \approx F(x) + \nabla F(x) \Delta x, which leads to the following linear least squares problem

min_{\Delta x} \frac{1}{2} ||\nabla F(x) \Delta x + F(x)||^2.    (3)

Solving Eq. (3) is equivalent to solving the normal equation

\nabla F(x_k)^T \nabla F(x_k) (x_{k+1} - x_k) = -\nabla F(x_k)^T F(x_k),    (4)

where x_k is the current value at iteration k. A Newton-like iterative update equation easily follows as

x_{k+1} = x_k - (\nabla F(x_k)^T \nabla F(x_k))^{-1} \nabla F(x_k)^T F(x_k).    (5)

In this equation, the inverse of the matrix \nabla F(x_k)^T \nabla F(x_k) may require some sort of regularization, similar to the Levenberg–Marquardt method [21,22]. In the current manuscript, we apply different regularization techniques with automatic selection of the regularization parameters. For high dimensional search spaces, the evaluation of the gradient \nabla F(x_k) with simple differencing methods is not feasible. The gradient can be evaluated efficiently using adjoint code, but such code is not available for many numerical simulators. Here, we utilize directional derivatives in a random direction vector u, defined as

\nabla_u F(x) = (F(x + h u) - F(x)) / h,    (6)

where h is the step size.
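For concreteness, the following sketch (our illustration, not code from the paper) performs one plain Gauss–Newton step, Eq. (5), with the Jacobian of the mismatch function assembled column by column from finite differences as in Eq. (6). The simulator H, the observations y_obs and the factor R^{-1/2} are assumed inputs; the O(n_x) simulator calls per step are what make this approach infeasible in high dimensions and motivate the ensemble estimate developed next.

```python
import numpy as np

def mismatch(H, x, y_obs, R_inv_sqrt):
    """F(x) = R^{-1/2} (y_obs - H(x)), the weighted residual."""
    return R_inv_sqrt @ (y_obs - H(x))

def gauss_newton_step(H, x, y_obs, R_inv_sqrt, h=1e-6):
    """One iteration of Eq. (5) with a finite-difference Jacobian, Eq. (6)."""
    n_x = x.size
    F0 = mismatch(H, x, y_obs, R_inv_sqrt)
    J = np.empty((F0.size, n_x))
    for j in range(n_x):                 # one simulator call per coordinate
        e = np.zeros(n_x)
        e[j] = 1.0
        J[:, j] = (mismatch(H, x + h * e, y_obs, R_inv_sqrt) - F0) / h
    # Normal equations (4); the pseudoinverse stands in for the regularized
    # inverse mentioned alongside the Levenberg-Marquardt remark above.
    return x - np.linalg.pinv(J.T @ J) @ (J.T @ F0)
```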
The directional derivative is related to the standard derivative by

\nabla_u F(x) = \nabla F(x) u,    (7)

where \nabla_u F(x) is of size n_o x 1, \nabla F(x) is of size n_o x n_x, u is of size n_x x 1 and n_x is the size of the search space. In all subsequent formulations we assume a unit step size h.

2.1. Iterative stochastic ensemble method

Directional derivatives are utilized within a stochastic ensemble method for parameter estimation. We use an ensemble of perturbations to approximate the standard derivative (gradient) from an ensemble of directional derivatives as

\nabla_U F(x) = \nabla F(x) U,    (8)

where \nabla_U F(x) is an ensemble of directional derivatives of size n_o x n_e, n_e is the ensemble size and U is the perturbation matrix of size n_x x n_e utilized in estimating the directional derivatives. Multiplying both sides from the right by U^T, one gets

(\nabla_U F(x)) U^T = \nabla F(x) (U U^T),    (9)

from which the standard derivative can be evaluated as

\nabla F(x) = (\nabla_U F(x)) U^T (U U^T)^{-1}.    (10)

For each ensemble member i, the directional derivative around x_k is

(\nabla_U F(x_k))_i = R^{-1/2} (H(x_k + u_i) - H(x_k)),    (11)

where u_i is a zero mean random perturbation in all components of x. For the ensemble of directional derivatives, we can rewrite the directional derivative matrix as

\nabla_U F(x_k) = R^{-1/2} Y,    (12)

where Y is of size n_o x n_e and each column i corresponds to (H(x_k + u_i) - H(x_k)). The matrix form of Eq. (10) is then

\nabla F(x) = R^{-1/2} Y U^T (U U^T)^{-1}.    (13)

Back-substitution of this ensemble based approximate derivative in Eq. (5) results in

x_{k+1} = x_k + ((R^{-1/2} Y U^T (U U^T)^{-1})^T (R^{-1/2} Y U^T (U U^T)^{-1}))^{-1} (R^{-1/2} Y U^T (U U^T)^{-1})^T F(x_k).    (14)

Further simplification results in

x_{k+1} = x_k + (U U^T) ((R^{-1/2} Y U^T)^T (R^{-1/2} Y U^T))^{-1} (R^{-1/2} Y U^T)^T F(x_k).    (15)

To simplify the equations further, we use the Moore–Penrose pseudoinverse by applying truncated singular value decomposition on both U and Y and only retaining the non-zero singular values. This enables distributing the inverse operator on non-square matrices as follows

x_{k+1} = x_k + (U U^T) (R^{-1/2} Y U^T)^+ (R^{-1/2} Y U^T)^{+T} (R^{-1/2} Y U^T)^T F(x_k),    (16)

where M^+ denotes the Moore–Penrose pseudoinverse of a matrix M using SVD. The iterative update equation is then simplified to

x_{k+1} = x_k + (U U^T) (R^{-1/2} Y U^T)^+ F(x_k).    (17)

Redistributing the Moore–Penrose pseudoinverse one gets

x_{k+1} = x_k + (U U^T) U^{+T} Y^+ R^{1/2} F(x_k).    (18)

Replacing U^T by the transpose of (U^+)^+ enables the elimination of U^T U^{+T}, giving

x_{k+1} = x_k + U Y^+ R^{1/2} F(x_k).    (19)

Following that, we use the relation Y^T (Y Y^T)^+ = Y^T (Y^{+T} Y^+) = Y^+ to obtain

x_{k+1} = x_k + U Y^T (Y Y^T)^+ R^{1/2} F(x_k).    (20)

Substituting F(x_k) = R^{-1/2} (y_{obs} - H(x_k)) in Eq. (20) results in

x_{k+1} = x_k + U Y^T (Y Y^T)^+ (y_{obs} - H(x_k)).    (21)

Eq. (21) is the main update equation of the proposed iterative stochastic ensemble method (ISEM). This update equation shares some similarities with the update step in EnKF. However, the derivation is based on optimization techniques and relies on the Moore–Penrose pseudoinverse of the output covariance instead of the standard inverse.
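A minimal sketch of one ISEM iteration built around Eq. (21) might look as follows. This is our paraphrase rather than the authors' code: the simulator H, the perturbation scale sigma_k and the random generator rng are assumed inputs, and the plain Moore–Penrose pseudoinverse stands in for the regularized inverses of Section 3.

```python
import numpy as np

def isem_update(H, x_k, y_obs, n_e, sigma_k, rng):
    """One ISEM update, Eq. (21): x += U Y^T (Y Y^T)^+ (y_obs - H(x))."""
    n_x = x_k.size
    U = sigma_k * rng.standard_normal((n_x, n_e))  # perturbation matrix
    y0 = H(x_k)
    # Column i of Y holds H(x_k + u_i) - H(x_k), as in Eqs. (11)-(12)
    Y = np.column_stack([H(x_k + U[:, i]) - y0 for i in range(n_e)])
    C_pinv = np.linalg.pinv(Y @ Y.T)               # pseudoinverse of C_yy
    return x_k + U @ Y.T @ C_pinv @ (y_obs - y0)
```

A call such as isem_update(H, x_k, y_obs, n_e=10, sigma_k=0.5, rng=np.random.default_rng(0)) would carry out one iteration; in practice the regularized inverses discussed next replace np.linalg.pinv.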

3. Regularization techniques

The iterative update Eq. (21) relies on an efficient regularization of the output covariance matrix C_{yy} = Y Y^T. One is particularly interested in stabilizing the inverse of the output covariance matrix such that small singular values do not dominate the solution [23]. The output matrix Y can be factored using singular value decomposition (SVD) as

Y = U_y S_y V_y^T,    (22)

where U_y and V_y are orthogonal matrices satisfying U_y U_y^T = I_k and V_y^T V_y = I_k, and S_y is a diagonal matrix with non-negative entries \sigma_1 >= \sigma_2 >= ... >= \sigma_k >= 0. The inverse of Y can be easily obtained as Y^{-1} = V_y S_y^{-1} U_y^T. If the diagonal matrix S_y contains small singular values, the inverses of these terms will dominate the solution. Thus, a diagonal filtering matrix \Phi is pre-multiplied by the inverse of S_y to stabilize the solution and obtain the filtered inverse \tilde{S}_y^{-1} = \Phi S_y^{-1}. For truncated singular value decomposition (TSVD), the diagonal filtering factors are defined as

\phi_i = 1 for i = 1, 2, ..., t and \phi_i = 0 for i = t+1, t+2, ..., k,    (23)

where the parameter t is the number of SVD components retained in the regularized solution. For the Tikhonov method, the filtering factors are defined as [23]

\phi_i = \sigma_i^2 / (\sigma_i^2 + \alpha^2), i = 1, ..., k,    (24)

where \alpha is the regularization parameter. The choice of the regularization parameter \alpha in the Tikhonov method has its roots in solving a minimization problem of the following form

min_x ||Y x - b||^2 + \alpha^2 ||x||^2,    (25)

where b is the right hand side vector for the linear system Y x = b and x is the unknowns vector. Both the TSVD and Tikhonov methods rely on a hyperparameter: the number of retained components t for TSVD and \alpha for the Tikhonov method. In the next subsection, we discuss two commonly used methods for automatically adjusting these hyperparameters. Using a regularized (filtered) output matrix \tilde{Y}, the regularized inverse of the output covariance matrix can be evaluated as

(\tilde{C}_{yy})^{-1} = U_y \tilde{S}_y^{-2} U_y^T = U_y \Phi^2 S_y^{-2} U_y^T.    (26)

The regularization solves the covariance matrix rank deficiency problem with the added cost of calculating the SVD of the output matrix Y. However, the size of the observations vector or the ensemble size is usually limited and efficient methods for SVD calculations can be used (see [29, chapter 8]). Truncated SVD regularization in the context of regression is commonly known as principal components regression (PCR) [30]. Tikhonov regularization in the statistics literature is commonly referred to as the ridge regression method [31,32].
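As an illustration (ours, under the shape assumption that Y is n_o x n_e), the filtered inverse of Eqs. (23)–(26) can be assembled directly from the SVD:

```python
import numpy as np

def regularized_cov_inverse(Y, method="tsvd", t=None, alpha=None):
    """Regularized (C_yy)^{-1} = U_y Phi^2 S_y^{-2} U_y^T, Eq. (26)."""
    Uy, s, _ = np.linalg.svd(Y, full_matrices=False)   # Eq. (22)
    if method == "tsvd":
        phi = (np.arange(s.size) < t).astype(float)    # Eq. (23)
    elif method == "tikhonov":
        phi = s ** 2 / (s ** 2 + alpha ** 2)           # Eq. (24)
    else:
        raise ValueError("unknown method")
    safe_s = np.where(s > 0, s, np.inf)  # zero singular values contribute 0
    return (Uy * (phi / safe_s) ** 2) @ Uy.T  # Uy diag(phi^2 / s^2) Uy^T
```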
3.1. Regularization parameter choice

The proper choice of the regularization parameter is a matter of choosing the right cut-off for the filtering factors (i.e. the break point in the SVD spectrum below which all singular vectors corresponding to small singular values are damped). Here, we test two different methods for estimating the regularization parameter: the generalized cross validation method and the L-curve method.

Generalized cross validation (GCV) [33] relies on dividing the data into two parts: the first part is used for estimation and the second part is used for validation. If one data point is omitted, then a good choice of the regularization parameter will result in a solution that is able to predict the missing data point with good accuracy, even though the omitted data point was not used in the solution process. In the context of solving a linear system, if an arbitrary element b_i of the right-hand side vector is left out, the corresponding regularized solution should predict this observation with small error. For a certain choice of the regularization parameter \eta (corresponding to \alpha for the Tikhonov method and t for the TSVD method), the regularized solution can be obtained as x_\eta = \tilde{Y}^{-1} b. The optimal parameter \eta is obtained by minimizing the following GCV functional

G(\eta) = ||Y x_\eta - b||^2 / (trace(I_n - Y \tilde{Y}^{-1}))^2.    (27)

The L-curve method is a simple method for choosing the regularization parameter [34–36]. The L-curve is a log–log plot of the norm of the regularized solution ||x_\eta|| versus the corresponding residual norm ||Y x_\eta - b|| for each value of the regularization parameter. This plot in the log–log scale often has the shape of the letter L, from which the method draws its name. The best regularization parameter lies at the corner of the L: for values higher than the corner value, the residual increases rapidly while the norm of the solution decreases only slowly, whereas for values smaller than the corner value, the norm of the solution increases rapidly without much decrease in the residual. It is expected that a solution near the corner of the L-shaped curve balances the regularization and perturbation errors.
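Of the two, GCV is the easier to automate. A possible implementation for the Tikhonov case, minimizing Eq. (27) over a user-supplied grid of candidate parameters, is sketched below; the names are ours, and the sketch exploits the fact that, for Tikhonov filtering, the trace term reduces to n minus the sum of the filter factors.

```python
import numpy as np

def gcv_choose_alpha(Y, b, alphas):
    """Pick the Tikhonov parameter minimizing the GCV functional, Eq. (27)."""
    Uy, s, _ = np.linalg.svd(Y, full_matrices=False)
    beta = Uy.T @ b
    best, best_score = None, np.inf
    for a in alphas:
        phi = s ** 2 / (s ** 2 + a ** 2)   # filter factors, Eq. (24)
        resid = b - Uy @ (phi * beta)      # residual Y x_alpha - b via the SVD
        denom = (b.size - phi.sum()) ** 2  # (trace(I - Y Y_reg^{-1}))^2
        score = (resid @ resid) / denom    # G(alpha)
        if score < best_score:
            best, best_score = a, score
    return best
```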

3.2. Shrinkage based estimators

Another class of covariance regularization techniques is denoted as shrinkage estimators. Shrinkage methods try to approximate the covariance matrix for high dimensional problems with a small number of samples. Having small ensembles for computational efficiency results in a rank deficient sample covariance matrix. The sample covariance matrix is an unbiased estimator that coincides with the maximum likelihood estimate when the samples are Gaussian distributed and when it is invertible. However, the sample covariance does not minimize the mean-squared error (MSE). For a problem of p unknowns and n samples, shrinkage covariance estimators commonly have the form

\tilde{C}_{yy} = (1 - \rho) C_{yy} + \rho F,    (28)

where \rho is the shrinkage coefficient that takes a value between 0 and 1, C_{yy} is the sample covariance matrix and F is a well conditioned diagonal matrix. A simple and reliable form of F is

F = (trace(C_{yy}) / p) I.    (29)

Ledoit and Wolf [37,38] proposed a shrinkage estimator for the case of high dimensional problems with a small number of samples (i.e. n << p). The proposed estimator asymptotically minimizes the mean-square-error (MSE) of the estimated covariance matrix, defined as ||\hat{C}_{yy} - \tilde{C}_{yy}||_F^2, where \hat{C}_{yy} is the exact covariance matrix and ||.||_F is the Frobenius norm. The Ledoit–Wolf (LW) estimator produces well conditioned covariance matrices for small sample sizes, and its performance advantages are distribution-free, not restricted to Gaussian assumptions. The LW estimator [37,38] is calculated using the following explicit equation

\rho_{LW} = ( (1/n^2) \sum_{i=1}^{n} || y_i y_i^T - C_{yy} ||_F^2 ) / ( trace(C_{yy}^2) - trace^2(C_{yy}) / p ),    (30)

where y_i is column i of the matrix Y. The LW shrinkage parameter is bounded by 1 as \rho_{LW} = min(\rho_{LW}, 1).

Recently, two methods have been proposed for an improved estimation of the shrinkage coefficient [24]. The Rao–Blackwell Ledoit–Wolf (RBLW) estimator improves the LW estimator under the Gaussian model. The shrinkage parameter in the RBLW method is defined as

\rho_{RBLW} = ( ((n-2)/n) trace(C_{yy}^2) + trace^2(C_{yy}) ) / ( (n+2) (trace(C_{yy}^2) - trace^2(C_{yy}) / p) ).    (31)

Similar to the LW estimator, the shrinkage parameter is bounded by 1 as \rho_{RBLW} = min(\rho_{RBLW}, 1). Both LW and RBLW were designed to asymptotically achieve the minimum MSE with respect to shrinkage estimators for large sample sizes. However, for small ensembles there is no guarantee that such optimality still holds. Chen et al. [24] proposed another estimator that minimizes the MSE in the estimated covariance using very small sample sizes n. The new estimator was named the oracle approximating shrinkage (OAS) estimator. Rather than employing asymptotic solutions, Chen et al. [24] utilized an iterative procedure for approximating the oracle. This iterative formula was used to establish the limiting closed form of the OAS shrinkage parameter

\rho_{OAS} = ( (1 - 2/p) trace(C_{yy}^2) + trace^2(C_{yy}) ) / ( (n + 1 - 2/p) (trace(C_{yy}^2) - trace^2(C_{yy}) / p) ).    (32)

Similar to the other shrinkage based methods, the shrinkage parameter is bounded by 1 as \rho_{OAS} = min(\rho_{OAS}, 1). Shrinkage based estimators are cheap to evaluate, as the shrinkage parameter is computed from an explicit formula and no cross-validation is required. These features make shrinkage based regularization computationally attractive.
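The OAS estimator is the simplest of the three to state compactly. The sketch below (our illustration, with Y assumed to hold n zero-mean samples of dimension p, one per column) evaluates Eqs. (28), (29) and (32); LW and RBLW follow the same template with Eqs. (30) and (31) supplying the coefficient.

```python
import numpy as np

def oas_covariance(Y):
    """Shrinkage estimate (1 - rho) C + rho F with the OAS rho of Eq. (32)."""
    p, n = Y.shape
    C = (Y @ Y.T) / n                 # sample covariance of zero-mean samples
    tr_C = np.trace(C)
    tr_C2 = np.trace(C @ C)
    rho = ((1.0 - 2.0 / p) * tr_C2 + tr_C ** 2) / \
          ((n + 1.0 - 2.0 / p) * (tr_C2 - tr_C ** 2 / p))
    rho = min(rho, 1.0)               # bound the shrinkage coefficient by 1
    F = (tr_C / p) * np.eye(p)        # well conditioned target, Eq. (29)
    return (1.0 - rho) * C + rho * F  # Eq. (28)
```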
4. Problem formulation

We want to reiterate that the proposed algorithm is simple and requires only a limited number of input constants that need to be tuned. Different forms of observation data can be included in the observation vector y_{obs} to account for any data that need to be assimilated. In the current algorithm, at each iteration the ensemble members are generated by adding random perturbations as defined in Eq. (11). The random perturbations are drawn from the Gaussian distribution N(0, I) and are scaled by a scalar defined by a decaying function. In all our numerical testing, we use a logarithmic rule proposed by Kushner [39], specifying the perturbation magnitude at iteration k as c / log(k + 1), where c is a user input. Other forms of decaying sequences, as proposed by Gelfand and Mitter [40] or by Fang et al. [41], can be used. In order to ensure error reduction, the iterative update Eq. (21) is modified by introducing a step size \gamma that takes an initial value of 1 and is adjusted to ensure error reduction. The modified update equation is

x_{k+1} = x_k + \gamma U Y^T (Y Y^T)^+ (y_{obs} - H(x_k)).    (33)

In the numerical testing, \gamma is multiplied by one half if no error reduction is achieved. This is repeated for up to 5 times and, if no error reduction is achieved, the current iteration of the stochastic algorithm is skipped and another ensemble is generated starting from the parameter values of the previous iteration.
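The two safeguards just described, the decaying perturbation magnitude and the step-size halving around Eq. (33), might be coded as follows. This is a sketch under our naming assumptions (H for the simulator, dx for the proposed update direction, err_k for the current data misfit), not the paper's implementation.

```python
import numpy as np

def perturbation_scale(k, c=1.0):
    """Kushner-type logarithmic decay c / log(k + 1); c is a user input.
    The iteration counter k is assumed to start at 1."""
    return c / np.log(k + 1.0)

def damped_update(H, x_k, dx, y_obs, err_k, max_halvings=5):
    """Accept x_k + gamma * dx only if the misfit decreases; else halve gamma.
    Returns None to signal that this ensemble should be skipped."""
    gamma = 1.0
    for _ in range(max_halvings):
        x_new = x_k + gamma * dx
        if np.linalg.norm(y_obs - H(x_new)) < err_k:
            return x_new
        gamma *= 0.5
    return None
```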

An advanced step size selection based on the Wolfe or Goldstein conditions could be applied within a line search strategy [28]. The outline of the proposed algorithm for parameter estimation is listed in Algorithm 1.

4.1. Subsurface flow problem formulation

A two-phase immiscible flow in a heterogeneous porous subsurface region is considered. For clarity of exposition, gravity and capillary effects are neglected. However, the proposed model calibration algorithm is independent of the selected physical mechanisms. The two phases will be referred to as water, with the subscript w for the aqueous phase, and oil, with the subscript o for the non-aqueous phase. This subsurface flow problem is described by the mass conservation equation and Darcy's law

\nabla \cdot v_t = q,  v_t = -K \lambda_t(S_w) \nabla p  over \Omega,    (34)

where v_t is the total Darcy velocity of the engaging fluids, q = Q_o / \rho_o + Q_w / \rho_w is the normalized source or sink term, K is the absolute permeability tensor, S_w is the water saturation, \lambda_t(S_w) = \lambda_w(S_w) + \lambda_o(S_w) is the total mobility and p = p_o = p_w is the pressure. Here \rho_w and \rho_o are the water and oil fluid densities, respectively. These equations can be combined to produce the pressure equation

-\nabla \cdot (K \lambda_t(S_w) \nabla p) = q.    (35)

The pore space is assumed to be filled with fluids and thus the fluid saturations should add up to one (i.e. S_o + S_w = 1). Then, only the water saturation equation is solved

\phi \partial S_w / \partial t + \nabla \cdot (f(S_w) v_t) = Q_w / \rho_w,    (36)

where \phi is the porosity and f(S_w) = \lambda_w / \lambda_t is the fractional flow function. The relative mobilities are modeled using polynomial equations of the form

\lambda_w(S_w) = (S_{nw})^2 / \mu_w,  \lambda_o(S_w) = (1 - S_{nw})^2 / \mu_o,  S_{nw} = (S_w - S_{wc}) / (1 - S_{or} - S_{wc}),    (37)

where S_{wc} is the connate or irreducible water saturation, S_{or} is the irreducible oil saturation and \mu_w, \mu_o are the water and oil fluid viscosities, respectively. The pressure Eq. (35) is discretized using the standard two-point flux approximation (TPFA) method, and the saturation Eq. (36) is discretized using a finite-volume scheme and solved implicitly by a standard Newton–Raphson iteration [42]. For simplicity, we limit the parameter estimation to the subsurface permeability map K. We also model this permeability field as a lognormal random variable, as it is usually heterogeneous and shows a high range of variability.
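The mobility model of Eq. (37) is simple enough to state directly in code. The sketch below is our illustration rather than part of the simulator; it uses the fluid properties quoted later in Section 5 (viscosities of 0.3 cp and 3 cp, S_wc = S_or = 0.2) as default arguments, with viscosities expressed in Pa s.

```python
import numpy as np

def mobilities(S_w, mu_w=0.3e-3, mu_o=3.0e-3, S_wc=0.2, S_or=0.2):
    """Polynomial relative mobilities of Eq. (37)."""
    S_nw = (S_w - S_wc) / (1.0 - S_or - S_wc)  # normalized water saturation
    S_nw = np.clip(S_nw, 0.0, 1.0)
    lam_w = S_nw ** 2 / mu_w                   # water mobility
    lam_o = (1.0 - S_nw) ** 2 / mu_o           # oil mobility
    return lam_w, lam_o

def fractional_flow(S_w):
    """f(S_w) = lam_w / lam_t, the fractional flow function in Eq. (36)."""
    lam_w, lam_o = mobilities(S_w)
    return lam_w / (lam_w + lam_o)
```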

4.2. Parameterization using Karhunen–Loève expansion

The unknown permeability field is parameterized by the Karhunen–Loève expansion (KLE) [25–27]. KLE is a classical method for the quantization of Gaussian random vectors. It is also known as proper orthogonal decomposition (POD) or principal component analysis (PCA) in the finite dimensional case [43]. The unknown field is formulated as a real-valued random field K with mean \mu(x) and a covariance function C(x_1, x_2). Let K(x, \xi) be a function of the position vector x defined over the problem domain, with \xi belonging to the space of random events. The Karhunen–Loève expansion provides a Fourier-like series form of K(x, \xi) as

K(x, \xi) = \mu(x) + \sum_{k=1}^{\infty} \sqrt{\lambda_k} \xi_k \psi_k(x),    (38)

where \xi_k is a set of random variables, \lambda_k is a set of real constants and \psi_k(x) are an orthonormal set of deterministic functions. The covariance function C is symmetric and positive semidefinite and has the spectral decomposition

C(x_1, x_2) = \sum_{k=1}^{\infty} \lambda_k \psi_k(x_1) \psi_k(x_2),    (39)

where \lambda_k > 0 are the eigenvalues and \psi_k are the corresponding eigenfunctions. Different realizations can be generated for different values of \xi_k. In the current manuscript, the values of \xi_k are estimated using ISEM such that the measured production data match the simulation results.

5. Numerical evaluation

The reference permeability fields for test Problems 1 and 2 are shown in Fig. 1(b) and (c), respectively, where both fields represent channelized models. These fields are binary, as the spatial properties take either the background material value or the channel material value. Different realizations of channelized models are generated using the Stanford Geostatistical Modeling Software, S-GeMS [44], based on the training image shown in Fig. 1(a). The training image is based on a similar example published in [45]. Fig. 2 shows ten different unconstrained realizations generated by S-GeMS. A total of one thousand different realizations were generated, and the mean and covariance of these realizations were used as an input for KLE to produce the search space basis functions.

Fig. 1. Details of the first and second parameter estimation problems. Part (a) shows the training image for the log-permeability field (in Darcy). Part (b) shows the reference log-permeability field for test Problem 1. Part (c) shows the reference log-permeability field for test Problem 2.

Fig. 2. Different realizations of the unconditioned log-permeability field obtained by the SNESIM algorithm as implemented in S-GeMS [44].
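The construction just described, feeding the mean and covariance of the realizations to the KLE, amounts to a principal component analysis of the realization ensemble. A sketch (ours) of building the basis and synthesizing a field from the truncated Eq. (38) follows; realizations is assumed to be an array with one flattened log-permeability field per row.

```python
import numpy as np

def kle_basis(realizations, n_terms):
    """Leading KLE eigenpairs from an (n_real, n_cells) array of samples."""
    mu = realizations.mean(axis=0)
    A = (realizations - mu) / np.sqrt(realizations.shape[0] - 1)
    # SVD of the centered data yields the eigenpairs of the sample covariance
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    lam = s[:n_terms] ** 2        # eigenvalues lambda_k of Eq. (39)
    psi = Vt[:n_terms]            # orthonormal modes psi_k, one per row
    return mu, lam, psi

def realize_field(mu, lam, psi, xi):
    """Truncated Eq. (38): K = mu + sum_k sqrt(lambda_k) xi_k psi_k."""
    return mu + (np.sqrt(lam) * xi) @ psi
```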

Fig. 3. The first 5 basis functions of the search space for test Problems 1 and 2, extracted using KLE.

Fig. 4. Injection and production well patterns (black dots for injection wells and white dots for production wells). Part (a) shows injection/production pattern 1 and part (b) shows injection/production pattern 2.

The first 5 basis functions are shown in Fig. 3. In all test cases, the leading terms of the KLE are used as basis functions for the search space. The proposed ISEM for nonlinear parameter estimation is not limited to the KLE parameterization and can be applied to kernel based parameterized fields. Kernel based methods are considered to be more suitable for representing permeability fields that follow non-Gaussian distributions [46].

The discretized model uses a 2D regular grid of 50 x 50 blocks in the x and y directions, respectively. The grid blocks are of uniform size in the x and y directions, with a unit thickness in the z direction. The porosity is assumed to be constant in all grid blocks and equal to 0.2. The water viscosity \mu_w is set to 0.3 cp and the oil viscosity \mu_o is set to 3 cp. The irreducible water saturation and irreducible oil saturation are set as S_{or} = S_{wc} = 0.2, and the simulation is run until 1 pore volume is injected. For the two reference permeability fields, two injection-production patterns are used. Fig. 4 shows the locations of injection wells as black dots and the production wells as white dots. For pattern 1, shown in Fig. 4(a), the inverted five spot pattern is used. For pattern 2, shown in Fig. 4(b), 9 production wells are distributed around 4 injection wells. For the parameter

estimation problem, the production curves at the production wells are used to define the misfit function and guide the inverse problem solution. Each water-cut curve was sampled at 5 points, and these samples were used for calculating the errors and the update equation. The observation data (water-cut values) are perturbed with uncorrelated white noise with a small standard deviation of 10^{-6} in order to be able to perform a convergence study and compare the different regularization techniques.

5.1. Test case 1

Fig. 5 shows a complete convergence study of the proposed algorithm for test case 1 and for injection-production wells pattern 1. The root mean square error (RMSE) in the water cut curve versus the number of forward runs is plotted for ensembles of 5, 10, 20 and 25 members in parts (a) to (d), respectively. For each case, eight different covariance regularization techniques are utilized: the shrinkage Ledoit–Wolf estimator (denoted sh-LW), the Rao–Blackwell Ledoit–Wolf estimator (denoted sh-RBLW), the oracle approximating shrinkage estimator (denoted sh-OAS), truncated SVD retaining 99% of the variance (denoted TSVD-99), TSVD with the filtering parameter estimated using the L-curve method (denoted TSVD-lcurve), TSVD with the filtering parameter estimated using the generalized cross validation method (denoted TSVD-gcv), Tikhonov regularization combined with the L-curve method for specifying the filtering parameter (denoted Tikh-lcurve) and Tikhonov regularization combined with generalized cross validation for specifying the filtering parameter (denoted Tikh-gcv).

Fig. 5. RMSE of the water fractional flow curve versus the number of forward runs for test case 1 and injection/production pattern 1: (a) ensemble of 5 members, (b) ensemble of 10 members, (c) ensemble of 20 members and (d) ensemble of 25 members.

All the results are averages of 8 different runs starting from the mean field obtained by the KLE of the different realizations. For the case of an ensemble of 5 members, Tikhonov regularization produced the fastest error reduction, while TSVD with generalized cross validation was the slowest in terms of error reduction. However, regardless of the utilized regularization method, ISEM did produce significant error reduction after 50 forward runs. For an ensemble of 10 members, all the methods performed relatively well. We observe that the RMSE after 100 forward runs using an ensemble of 10 members is much lower than the one achieved by an ensemble of 5 members. For the ensembles with 20 and 25 members, the differences between the regularization methods become clear. The regularization parameter selection based on generalized cross validation with the Tikhonov or TSVD methods performed poorly in comparison to the other methods. The results also show that the shrinkage based estimators perform well in comparison to the computationally more expensive methods based on singular value decomposition.

Fig. 6 shows a sample of the fitted water fractional flow curves at the production wells before and after running the calibration algorithm. The results are shown in terms of the dimensionless time defined by the pore volume injected (PVI). Four production wells exist in pattern 1. The case shown in Fig. 6 is for an ensemble of 20 members, with the covariance matrices estimated using the OAS estimator.

Fig. 6. Water cut curves at the production wells for an ensemble of 20 members using the oracle approximating shrinkage (OAS) estimator for test case 1 and injection/production pattern 1. Part (a) initial water cut curves. Part (b) fitted water cut curves.

Fig. 7. Calibrated log-permeability field using ISEM combined with the oracle approximating shrinkage (OAS) estimator for test case 1 and injection/production pattern 1: (a) ensemble of 5 members, (b) ensemble of 10 members, (c) ensemble of 20 members.

A good match is observed after 100 forward runs. Fig. 7 shows the calibrated log-permeability fields (in Darcy) obtained with different ensemble sizes using the OAS estimator. These plots show some similarities with the reference field; however, this is not always the case, as the problem is ill-conditioned and might admit many different solutions. The results from this example show that the proposed method for integrating dynamic data into the model successfully conditions the permeability maps to all measured data points. The automatic selection of the regularization parameter at each iteration eliminates the need for heuristic tuning.

Fig. 8 shows the root mean square error (RMSE) in the water cut curve versus the number of forward runs for test case 1 under injection/production wells pattern 2. Different covariance regularization methods are compared and different ensemble sizes are utilized. For this pattern, the calibration data is spatially distributed over 9 production wells. For the different ensemble sizes, there are two major observations: Tikhonov regularization with the L-curve method performed generally well, and TSVD with generalized cross validation performed poorly in comparison to the other methods. The results of the shrinkage based estimators for this example are good but less emphatic than in the previous test case. This could be attributed to the Gaussianity assumptions used to derive the explicit formulas of these shrinkage estimators. Fig. 9 shows a sample of the water fractional flow curves at the production wells before and after applying the parameter estimation algorithm. For the shown case, Tikhonov regularization with the L-curve method and an ensemble of 10 members were used.

Fig. 8. RMSE of the water fractional flow curve versus the number of forward runs for test case 1 and injection/production pattern 2: (a) ensemble of 5 members, (b) ensemble of 10 members, (c) ensemble of 20 members and (d) ensemble of 25 members.

Fig. 9. Water cut curves at the production wells for an ensemble of 10 members using Tikhonov regularization with the L-curve method for test case 1 and injection/production pattern 2. Part (a) initial water cut curves. Part (b) fitted water cut curves.

Fig. 10. Calibrated log-permeability field using ISEM combined with Tikhonov regularization utilizing the L-curve method for test case 1 and injection/production pattern 2: (a) ensemble of 5 members, (b) ensemble of 10 members, (c) ensemble of 20 members.

The matching of the calibration data is not perfect, but the general trends of the production curves are detected. The number of forward runs was limited to 100, and increasing that limit might produce a perfect match. Fig. 10 shows the calibrated log-permeability fields (in Darcy) obtained with different ensemble sizes while using Tikhonov regularization with the L-curve method. The obtained permeability fields are different, which shows that different ensemble sizes have been attracted to different local solutions. This is expected, as the problem is ill-posed and complete search space exploration using different starting points might be needed.

5.2. Test case 2

In this test case, the reference permeability map is shown in Fig. 1(c). The convergence of the RMSE for the water cut curves versus the number of forward runs for injection/production wells pattern 1 is shown in Fig. 11. Different covariance regularization methods are compared and different ensemble sizes are utilized. For small ensemble sizes, the performance of the different methods is comparable. However, for the ensembles of size 20 and 25 members, the regularization parameter estimation based on generalized cross validation has a clear disadvantage in terms of error reduction. This is consistent with the results from the previous example. The results show the good performance of the shrinkage based estimators, especially for larger ensemble sizes.

Fig. 11. RMSE of the water fractional flow curve versus the number of forward runs for test case 2 and injection/production pattern 1: (a) ensemble of 5 members, (b) ensemble of 10 members, (c) ensemble of 20 members and (d) ensemble of 25 members.

Comparing the results from small ensembles with those obtained with larger ensembles, we observe that small ensembles produce significant error reduction in the first few iterations and then fail to reduce the error further. However, larger ensembles manage to reduce the errors at all stages of the calibration algorithm. At the early stages of the parameter estimation algorithm, the accuracy of the estimated search direction is not essential for error reduction and thus smaller ensembles showed good performance. However, at later stages in the parameter estimation algorithm, an accurate search direction is needed for further error reduction. An adaptive algorithm where small ensembles are initially used, followed by larger ensembles, is expected to produce a more efficient parameter estimation method.

Fig. 12 shows a sample of the water-cut curves at the production wells before and after applying the parameter estimation algorithm. The Rao–Blackwell Ledoit–Wolf (RBLW) estimator was used for regularizing the output covariance matrix and an ensemble of 10 members was utilized. The production data at the four wells are fully matched at the end of the parameter estimation algorithm. Fig. 13 shows the corresponding calibrated log-permeability fields (in Darcy) using the RBLW estimator with different ensemble sizes. For injection pattern 2, Fig. 14 shows the RMSE in the water cut curve versus the number of forward runs. Eight different covariance regularization methods are used.

Fig. 12. Water cut curves at the production wells for an ensemble of 10 members using the Rao–Blackwell Ledoit–Wolf (RBLW) estimator for test case 2 and injection/production pattern 1. Part (a) initial water cut curves. Part (b) fitted water cut curves.

Fig. 13. Calibrated log-permeability field using ISEM combined with the Rao–Blackwell Ledoit–Wolf (RBLW) estimator for test case 2 and injection/production pattern 1: (a) ensemble of 5 members, (b) ensemble of 10 members, (c) ensemble of 20 members.

TSVD-gcv was the worst method in comparison to the other methods in terms of error reduction. Tikhonov regularization with the L-curve method was among the top two best performing methods. The performance of the shrinkage based estimators is not as good as for the case of injection/production pattern 1. This is attributed to the difficulty of solving the inverse problem, as the calibration data is spatially distributed over the entire domain. Fig. 15 shows a sample of the water cut curves at the production wells before and after running the calibration algorithm. Tikhonov regularization with the L-curve method was used for evaluating the regularized inverse of the output covariance matrix. The fitted curves reasonably match the reference solution for this difficult problem. Fig. 16 shows the calibrated log-permeability fields (in Darcy) using different ensemble sizes.

5.3. Discussion of the proposed method

The update equation in the proposed iterative stochastic ensemble method resembles the update equation in EnKF. However, in EnKF the inverse of the output covariance matrix is augmented with the output error covariance. Adding this matrix to the output covariance can be considered as a Bayesian regularization term. This is to be contrasted with the proposed ISEM update equation, where regularization is performed automatically by standard techniques.

The different iterative ensemble based methods [13,15–19,47], reviewed in the introduction, have many commonalities and rely on almost the same assumptions. However, the differences between these methods are noticed in how the uncertainties are updated and how regularization is applied. The problem of updating uncertainties for nonlinear problems is challenging.

Fig. 14. RMSE of the water fractional flow curve versus the number of forward runs for test case 2 and injection/production pattern 2: (a) ensemble of 5 members, (b) ensemble of 10 members, (c) ensemble of 20 members and (d) ensemble of 25 members.

For problems with a high dimensional search space, the curse of dimensionality implies that the support of the posterior PDF is likely to be much smaller than that of the prior. This increases the probability of missing high probability regions of the posterior PDF. On the other hand, the repeated use of the calibration data in an iterative algorithm will result in an underestimation of uncertainty, or what is called data overfitting [48]. The major question is how to balance the underestimation and the overestimation of uncertainties and how that affects the convergence of the solution. Different iterative algorithms tried to deal with this problem by early stopping of the iterative update [17], by including the prior in the Kalman update equation [13,18] or by not updating the error covariance in the iterative scheme [16]. The second problem is related to the regularization of the Kalman gain matrix to avoid divergence. Generally speaking, including the prior in the objective function introduces some sort of regularization for an ill-posed problem, as some preference is given to solutions that are close to the prior. However, regularization by TSVD penalizes the magnitude of the solution and gives a preference to solutions with minimum l2 norm.

The proposed ISEM deals with the previously mentioned questions in a different way. First, the proposed algorithm is a parameter estimation algorithm and does not try to update the error covariance. We adopted a standard technique utilized in many stochastic optimization methods where a gain sequence is formulated with a decaying magnitude to describe the

random perturbations [39]. If one is interested in evaluating the parameter uncertainties, Eq. (2) in [49] can be utilized at the converged solution. Alternatively, ISEM can be utilized within the randomized maximum likelihood (RML) framework [50,51] to solve a set of inverse problems with perturbed objective functions. The utilization of a stochastic search algorithm within RML would be similar to the recent work of Li and Reynolds [52]. As for the second point regarding regularization, we note that for the current parameterization using KLE, the parameter prior is centered around zero. This means that regularization using TSVD, by minimizing the solution l2 norm, is equivalent to Bayesian regularization by including the prior term. Once that is realized, only one type of regularization needs to be applied. This is to be contrasted with methods that include Bayesian regularization terms in the update equation and follow that by a repeated application of regularization for evaluating sensitivities and for solving the rank deficiency problem of the sample covariance matrix. The repeated application of different regularization steps with different thresholds might result in excessive information loss. In the proposed algorithm, only one regularization operation is applied at each iteration.

Fig. 15. Water cut curves at the production wells for an ensemble of 10 members using Tikhonov regularization with the L-curve method for test case 2 and injection/production pattern 2. Part (a) initial water cut curves. Part (b) fitted water cut curves.

Fig. 16. Calibrated log-permeability field using ISEM combined with Tikhonov regularization utilizing the L-curve method for test case 2 and injection/production pattern 2: (a) ensemble of 5 members, (b) ensemble of 10 members, (c) ensemble of 20 members.

6. Conclusions

In this paper, an iterative stochastic ensemble method (ISEM) for nonlinear parameter estimation for subsurface flow models was presented. ISEM can be applied to any numerical simulator as a black box. The algorithm is iterative and has


More information

SIGNAL AND IMAGE RESTORATION: SOLVING

SIGNAL AND IMAGE RESTORATION: SOLVING 1 / 55 SIGNAL AND IMAGE RESTORATION: SOLVING ILL-POSED INVERSE PROBLEMS - ESTIMATING PARAMETERS Rosemary Renaut http://math.asu.edu/ rosie CORNELL MAY 10, 2013 2 / 55 Outline Background Parameter Estimation

More information

11280 Electrical Resistivity Tomography Time-lapse Monitoring of Three-dimensional Synthetic Tracer Test Experiments

11280 Electrical Resistivity Tomography Time-lapse Monitoring of Three-dimensional Synthetic Tracer Test Experiments 11280 Electrical Resistivity Tomography Time-lapse Monitoring of Three-dimensional Synthetic Tracer Test Experiments M. Camporese (University of Padova), G. Cassiani* (University of Padova), R. Deiana

More information

A008 THE PROBABILITY PERTURBATION METHOD AN ALTERNATIVE TO A TRADITIONAL BAYESIAN APPROACH FOR SOLVING INVERSE PROBLEMS

A008 THE PROBABILITY PERTURBATION METHOD AN ALTERNATIVE TO A TRADITIONAL BAYESIAN APPROACH FOR SOLVING INVERSE PROBLEMS A008 THE PROAILITY PERTURATION METHOD AN ALTERNATIVE TO A TRADITIONAL AYESIAN APPROAH FOR SOLVING INVERSE PROLEMS Jef AERS Stanford University, Petroleum Engineering, Stanford A 94305-2220 USA Abstract

More information

Regularization via Spectral Filtering

Regularization via Spectral Filtering Regularization via Spectral Filtering Lorenzo Rosasco MIT, 9.520 Class 7 About this class Goal To discuss how a class of regularization methods originally designed for solving ill-posed inverse problems,

More information

ECE295, Data Assimila0on and Inverse Problems, Spring 2015

ECE295, Data Assimila0on and Inverse Problems, Spring 2015 ECE295, Data Assimila0on and Inverse Problems, Spring 2015 1 April, Intro; Linear discrete Inverse problems (Aster Ch 1 and 2) Slides 8 April, SVD (Aster ch 2 and 3) Slides 15 April, RegularizaFon (ch

More information

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter

Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter arxiv:physics/0511236 v1 28 Nov 2005 Brian R. Hunt Institute for Physical Science and Technology and Department

More information

Non-polynomial Least-squares fitting

Non-polynomial Least-squares fitting Applied Math 205 Last time: piecewise polynomial interpolation, least-squares fitting Today: underdetermined least squares, nonlinear least squares Homework 1 (and subsequent homeworks) have several parts

More information

MULTISCALE FINITE ELEMENT METHODS FOR STOCHASTIC POROUS MEDIA FLOW EQUATIONS AND APPLICATION TO UNCERTAINTY QUANTIFICATION

MULTISCALE FINITE ELEMENT METHODS FOR STOCHASTIC POROUS MEDIA FLOW EQUATIONS AND APPLICATION TO UNCERTAINTY QUANTIFICATION MULTISCALE FINITE ELEMENT METHODS FOR STOCHASTIC POROUS MEDIA FLOW EQUATIONS AND APPLICATION TO UNCERTAINTY QUANTIFICATION P. DOSTERT, Y. EFENDIEV, AND T.Y. HOU Abstract. In this paper, we study multiscale

More information

Machine Learning Applied to 3-D Reservoir Simulation

Machine Learning Applied to 3-D Reservoir Simulation Machine Learning Applied to 3-D Reservoir Simulation Marco A. Cardoso 1 Introduction The optimization of subsurface flow processes is important for many applications including oil field operations and

More information

Linear regression methods

Linear regression methods Linear regression methods Most of our intuition about statistical methods stem from linear regression. For observations i = 1,..., n, the model is Y i = p X ij β j + ε i, j=1 where Y i is the response

More information

Spectral Regularization

Spectral Regularization Spectral Regularization Lorenzo Rosasco 9.520 Class 07 February 27, 2008 About this class Goal To discuss how a class of regularization methods originally designed for solving ill-posed inverse problems,

More information

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR

THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},

More information

A Spectral Approach to Linear Bayesian Updating

A Spectral Approach to Linear Bayesian Updating A Spectral Approach to Linear Bayesian Updating Oliver Pajonk 1,2, Bojana V. Rosic 1, Alexander Litvinenko 1, and Hermann G. Matthies 1 1 Institute of Scientific Computing, TU Braunschweig, Germany 2 SPT

More information

Assessing the Value of Information from Inverse Modelling for Optimising Long-Term Oil Reservoir Performance

Assessing the Value of Information from Inverse Modelling for Optimising Long-Term Oil Reservoir Performance Assessing the Value of Information from Inverse Modelling for Optimising Long-Term Oil Reservoir Performance Eduardo Barros, TU Delft Paul Van den Hof, TU Eindhoven Jan Dirk Jansen, TU Delft 1 Oil & gas

More information

DATA ASSIMILATION FOR FLOOD FORECASTING

DATA ASSIMILATION FOR FLOOD FORECASTING DATA ASSIMILATION FOR FLOOD FORECASTING Arnold Heemin Delft University of Technology 09/16/14 1 Data assimilation is the incorporation of measurement into a numerical model to improve the model results

More information

Stochastic Spectral Approaches to Bayesian Inference

Stochastic Spectral Approaches to Bayesian Inference Stochastic Spectral Approaches to Bayesian Inference Prof. Nathan L. Gibson Department of Mathematics Applied Mathematics and Computation Seminar March 4, 2011 Prof. Gibson (OSU) Spectral Approaches to

More information

Ensemble Kalman filter for automatic history matching of geologic facies

Ensemble Kalman filter for automatic history matching of geologic facies Journal of Petroleum Science and Engineering 47 (2005) 147 161 www.elsevier.com/locate/petrol Ensemble Kalman filter for automatic history matching of geologic facies Ning LiuT, Dean S. Oliver School of

More information

Lecture 16: Small Sample Size Problems (Covariance Estimation) Many thanks to Carlos Thomaz who authored the original version of these slides

Lecture 16: Small Sample Size Problems (Covariance Estimation) Many thanks to Carlos Thomaz who authored the original version of these slides Lecture 16: Small Sample Size Problems (Covariance Estimation) Many thanks to Carlos Thomaz who authored the original version of these slides Intelligent Data Analysis and Probabilistic Inference Lecture

More information

Computer Vision Group Prof. Daniel Cremers. 3. Regression

Computer Vision Group Prof. Daniel Cremers. 3. Regression Prof. Daniel Cremers 3. Regression Categories of Learning (Rep.) Learnin g Unsupervise d Learning Clustering, density estimation Supervised Learning learning from a training data set, inference on the

More information

L11: Pattern recognition principles

L11: Pattern recognition principles L11: Pattern recognition principles Bayesian decision theory Statistical classifiers Dimensionality reduction Clustering This lecture is partly based on [Huang, Acero and Hon, 2001, ch. 4] Introduction

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Robust Ensemble Filtering With Improved Storm Surge Forecasting

Robust Ensemble Filtering With Improved Storm Surge Forecasting Robust Ensemble Filtering With Improved Storm Surge Forecasting U. Altaf, T. Buttler, X. Luo, C. Dawson, T. Mao, I.Hoteit Meteo France, Toulouse, Nov 13, 2012 Project Ensemble data assimilation for storm

More information

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing

A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 5, SEPTEMBER 2001 1215 A Cross-Associative Neural Network for SVD of Nonsquared Data Matrix in Signal Processing Da-Zheng Feng, Zheng Bao, Xian-Da Zhang

More information

Uncertainty quantification for Wavefield Reconstruction Inversion

Uncertainty quantification for Wavefield Reconstruction Inversion Uncertainty quantification for Wavefield Reconstruction Inversion Zhilong Fang *, Chia Ying Lee, Curt Da Silva *, Felix J. Herrmann *, and Rachel Kuske * Seismic Laboratory for Imaging and Modeling (SLIM),

More information

The Ensemble Kalman Filter:

The Ensemble Kalman Filter: p.1 The Ensemble Kalman Filter: Theoretical formulation and practical implementation Geir Evensen Norsk Hydro Research Centre, Bergen, Norway Based on Evensen 23, Ocean Dynamics, Vol 53, No 4 p.2 The Ensemble

More information

Non parametric Bayesian belief nets (NPBBNs) versus ensemble Kalman filter (EnKF) in reservoir simulation

Non parametric Bayesian belief nets (NPBBNs) versus ensemble Kalman filter (EnKF) in reservoir simulation Delft University of Technology Faculty of Electrical Engineering, Mathematics and Computer Science Delft Institute of Applied Mathematics Non parametric Bayesian belief nets (NPBBNs) versus ensemble Kalman

More information

Ensemble square-root filters

Ensemble square-root filters Ensemble square-root filters MICHAEL K. TIPPETT International Research Institute for climate prediction, Palisades, New Yor JEFFREY L. ANDERSON GFDL, Princeton, New Jersy CRAIG H. BISHOP Naval Research

More information

B008 COMPARISON OF METHODS FOR DOWNSCALING OF COARSE SCALE PERMEABILITY ESTIMATES

B008 COMPARISON OF METHODS FOR DOWNSCALING OF COARSE SCALE PERMEABILITY ESTIMATES 1 B8 COMPARISON OF METHODS FOR DOWNSCALING OF COARSE SCALE PERMEABILITY ESTIMATES Alv-Arne Grimstad 1 and Trond Mannseth 1,2 1 RF-Rogaland Research 2 Now with CIPR - Centre for Integrated Petroleum Research,

More information

Reliability of Seismic Data for Hydrocarbon Reservoir Characterization

Reliability of Seismic Data for Hydrocarbon Reservoir Characterization Reliability of Seismic Data for Hydrocarbon Reservoir Characterization Geetartha Dutta (gdutta@stanford.edu) December 10, 2015 Abstract Seismic data helps in better characterization of hydrocarbon reservoirs.

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

Experimental designs for precise parameter estimation for non-linear models

Experimental designs for precise parameter estimation for non-linear models Minerals Engineering 17 (2004) 431 436 This article is also available online at: www.elsevier.com/locate/mineng Experimental designs for precise parameter estimation for non-linear models Z. Xiao a, *,

More information

= (G T G) 1 G T d. m L2

= (G T G) 1 G T d. m L2 The importance of the Vp/Vs ratio in determining the error propagation and the resolution in linear AVA inversion M. Aleardi, A. Mazzotti Earth Sciences Department, University of Pisa, Italy Introduction.

More information

Linear Regression Linear Regression with Shrinkage

Linear Regression Linear Regression with Shrinkage Linear Regression Linear Regression ith Shrinkage Introduction Regression means predicting a continuous (usually scalar) output y from a vector of continuous inputs (features) x. Example: Predicting vehicle

More information

DATA ASSIMILATION FOR COMPLEX SUBSURFACE FLOW FIELDS

DATA ASSIMILATION FOR COMPLEX SUBSURFACE FLOW FIELDS POLITECNICO DI MILANO Department of Civil and Environmental Engineering Doctoral Programme in Environmental and Infrastructure Engineering XXVI Cycle DATA ASSIMILATION FOR COMPLEX SUBSURFACE FLOW FIELDS

More information

Kalman Filter and Ensemble Kalman Filter

Kalman Filter and Ensemble Kalman Filter Kalman Filter and Ensemble Kalman Filter 1 Motivation Ensemble forecasting : Provides flow-dependent estimate of uncertainty of the forecast. Data assimilation : requires information about uncertainty

More information

Linear Algebra for Machine Learning. Sargur N. Srihari

Linear Algebra for Machine Learning. Sargur N. Srihari Linear Algebra for Machine Learning Sargur N. srihari@cedar.buffalo.edu 1 Overview Linear Algebra is based on continuous math rather than discrete math Computer scientists have little experience with it

More information

Deep learning / Ian Goodfellow, Yoshua Bengio and Aaron Courville. - Cambridge, MA ; London, Spis treści

Deep learning / Ian Goodfellow, Yoshua Bengio and Aaron Courville. - Cambridge, MA ; London, Spis treści Deep learning / Ian Goodfellow, Yoshua Bengio and Aaron Courville. - Cambridge, MA ; London, 2017 Spis treści Website Acknowledgments Notation xiii xv xix 1 Introduction 1 1.1 Who Should Read This Book?

More information

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω ECO 513 Spring 2015 TAKEHOME FINAL EXAM (1) Suppose the univariate stochastic process y is ARMA(2,2) of the following form: y t = 1.6974y t 1.9604y t 2 + ε t 1.6628ε t 1 +.9216ε t 2, (1) where ε is i.i.d.

More information

Gaussian Process Approximations of Stochastic Differential Equations

Gaussian Process Approximations of Stochastic Differential Equations Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Centre for Computational Statistics and Machine Learning University College London c.archambeau@cs.ucl.ac.uk CSML

More information

Achieving depth resolution with gradient array survey data through transient electromagnetic inversion

Achieving depth resolution with gradient array survey data through transient electromagnetic inversion Achieving depth resolution with gradient array survey data through transient electromagnetic inversion Downloaded /1/17 to 128.189.118.. Redistribution subject to SEG license or copyright; see Terms of

More information

Fast principal component analysis using fixed-point algorithm

Fast principal component analysis using fixed-point algorithm Pattern Recognition Letters 28 (27) 1151 1155 www.elsevier.com/locate/patrec Fast principal component analysis using fixed-point algorithm Alok Sharma *, Kuldip K. Paliwal Signal Processing Lab, Griffith

More information

CSE 554 Lecture 7: Alignment

CSE 554 Lecture 7: Alignment CSE 554 Lecture 7: Alignment Fall 2012 CSE554 Alignment Slide 1 Review Fairing (smoothing) Relocating vertices to achieve a smoother appearance Method: centroid averaging Simplification Reducing vertex

More information

Polynomial Chaos and Karhunen-Loeve Expansion

Polynomial Chaos and Karhunen-Loeve Expansion Polynomial Chaos and Karhunen-Loeve Expansion 1) Random Variables Consider a system that is modeled by R = M(x, t, X) where X is a random variable. We are interested in determining the probability of the

More information

2 Tikhonov Regularization and ERM

2 Tikhonov Regularization and ERM Introduction Here we discusses how a class of regularization methods originally designed to solve ill-posed inverse problems give rise to regularized learning algorithms. These algorithms are kernel methods

More information

Preface to Second Edition... vii. Preface to First Edition...

Preface to Second Edition... vii. Preface to First Edition... Contents Preface to Second Edition..................................... vii Preface to First Edition....................................... ix Part I Linear Algebra 1 Basic Vector/Matrix Structure and

More information

Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model. David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC

Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model. David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC Lagrangian Data Assimilation and Manifold Detection for a Point-Vortex Model David Darmon, AMSC Kayo Ide, AOSC, IPST, CSCAMM, ESSIC Background Data Assimilation Iterative process Forecast Analysis Background

More information

Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model

Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model Handling nonlinearity in Ensemble Kalman Filter: Experiments with the three-variable Lorenz model Shu-Chih Yang 1*, Eugenia Kalnay, and Brian Hunt 1. Department of Atmospheric Sciences, National Central

More information

Least Absolute Shrinkage is Equivalent to Quadratic Penalization

Least Absolute Shrinkage is Equivalent to Quadratic Penalization Least Absolute Shrinkage is Equivalent to Quadratic Penalization Yves Grandvalet Heudiasyc, UMR CNRS 6599, Université de Technologie de Compiègne, BP 20.529, 60205 Compiègne Cedex, France Yves.Grandvalet@hds.utc.fr

More information

Bayesian Inverse problem, Data assimilation and Localization

Bayesian Inverse problem, Data assimilation and Localization Bayesian Inverse problem, Data assimilation and Localization Xin T Tong National University of Singapore ICIP, Singapore 2018 X.Tong Localization 1 / 37 Content What is Bayesian inverse problem? What is

More information

Deep Learning Book Notes Chapter 2: Linear Algebra

Deep Learning Book Notes Chapter 2: Linear Algebra Deep Learning Book Notes Chapter 2: Linear Algebra Compiled By: Abhinaba Bala, Dakshit Agrawal, Mohit Jain Section 2.1: Scalars, Vectors, Matrices and Tensors Scalar Single Number Lowercase names in italic

More information

component risk analysis

component risk analysis 273: Urban Systems Modeling Lec. 3 component risk analysis instructor: Matteo Pozzi 273: Urban Systems Modeling Lec. 3 component reliability outline risk analysis for components uncertain demand and uncertain

More information

Linear Regression Linear Regression with Shrinkage

Linear Regression Linear Regression with Shrinkage Linear Regression Linear Regression ith Shrinkage Introduction Regression means predicting a continuous (usually scalar) output y from a vector of continuous inputs (features) x. Example: Predicting vehicle

More information

Organization. I MCMC discussion. I project talks. I Lecture.

Organization. I MCMC discussion. I project talks. I Lecture. Organization I MCMC discussion I project talks. I Lecture. Content I Uncertainty Propagation Overview I Forward-Backward with an Ensemble I Model Reduction (Intro) Uncertainty Propagation in Causal Systems

More information

Enhanced linearized reduced-order models for subsurface flow simulation

Enhanced linearized reduced-order models for subsurface flow simulation Enhanced linearized reduced-order models for subsurface flow simulation J. He 1, J. Sætrom 2, L.J. Durlofsky 1 1 Department of Energy Resources Engineering, Stanford University, Stanford, CA 94305, U.S.A.

More information

Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory

Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory Statistical Inference Parametric Inference Maximum Likelihood Inference Exponential Families Expectation Maximization (EM) Bayesian Inference Statistical Decison Theory IP, José Bioucas Dias, IST, 2007

More information

Inferring biological dynamics Iterated filtering (IF)

Inferring biological dynamics Iterated filtering (IF) Inferring biological dynamics 101 3. Iterated filtering (IF) IF originated in 2006 [6]. For plug-and-play likelihood-based inference on POMP models, there are not many alternatives. Directly estimating

More information

Fundamentals of Data Assimilation

Fundamentals of Data Assimilation National Center for Atmospheric Research, Boulder, CO USA GSI Data Assimilation Tutorial - June 28-30, 2010 Acknowledgments and References WRFDA Overview (WRF Tutorial Lectures, H. Huang and D. Barker)

More information

CS281 Section 4: Factor Analysis and PCA

CS281 Section 4: Factor Analysis and PCA CS81 Section 4: Factor Analysis and PCA Scott Linderman At this point we have seen a variety of machine learning models, with a particular emphasis on models for supervised learning. In particular, we

More information

4. DATA ASSIMILATION FUNDAMENTALS

4. DATA ASSIMILATION FUNDAMENTALS 4. DATA ASSIMILATION FUNDAMENTALS... [the atmosphere] "is a chaotic system in which errors introduced into the system can grow with time... As a consequence, data assimilation is a struggle between chaotic

More information

REDUCING ORDER METHODS APPLIED TO RESERVOIR SIMULATION

REDUCING ORDER METHODS APPLIED TO RESERVOIR SIMULATION REDUCING ORDER METHODS APPLIED TO RESERVOIR SIMULATION Lindaura Maria Steffens Dara Liandra Lanznaster lindaura.steffens@udesc.br daraliandra@gmail.com Santa Catarina State University CESFI, Av. Central,

More information

Dimensionality Reduction: PCA. Nicholas Ruozzi University of Texas at Dallas

Dimensionality Reduction: PCA. Nicholas Ruozzi University of Texas at Dallas Dimensionality Reduction: PCA Nicholas Ruozzi University of Texas at Dallas Eigenvalues λ is an eigenvalue of a matrix A R n n if the linear system Ax = λx has at least one non-zero solution If Ax = λx

More information

Short tutorial on data assimilation

Short tutorial on data assimilation Mitglied der Helmholtz-Gemeinschaft Short tutorial on data assimilation 23 June 2015 Wolfgang Kurtz & Harrie-Jan Hendricks Franssen Institute of Bio- and Geosciences IBG-3 (Agrosphere), Forschungszentrum

More information

A hybrid Marquardt-Simulated Annealing method for solving the groundwater inverse problem

A hybrid Marquardt-Simulated Annealing method for solving the groundwater inverse problem Calibration and Reliability in Groundwater Modelling (Proceedings of the ModelCARE 99 Conference held at Zurich, Switzerland, September 1999). IAHS Publ. no. 265, 2000. 157 A hybrid Marquardt-Simulated

More information

Linear Regression (continued)

Linear Regression (continued) Linear Regression (continued) Professor Ameet Talwalkar Professor Ameet Talwalkar CS260 Machine Learning Algorithms February 6, 2017 1 / 39 Outline 1 Administration 2 Review of last lecture 3 Linear regression

More information

13. Nonlinear least squares

13. Nonlinear least squares L. Vandenberghe ECE133A (Fall 2018) 13. Nonlinear least squares definition and examples derivatives and optimality condition Gauss Newton method Levenberg Marquardt method 13.1 Nonlinear least squares

More information

Uncertainty Quantification in Discrete Fracture Network Models

Uncertainty Quantification in Discrete Fracture Network Models Uncertainty Quantification in Discrete Fracture Network Models Claudio Canuto Dipartimento di Scienze Matematiche, Politecnico di Torino claudio.canuto@polito.it Joint work with Stefano Berrone, Sandra

More information

Introduction to Machine Learning

Introduction to Machine Learning 10-701 Introduction to Machine Learning PCA Slides based on 18-661 Fall 2018 PCA Raw data can be Complex, High-dimensional To understand a phenomenon we measure various related quantities If we knew what

More information

Foundations of Computer Vision

Foundations of Computer Vision Foundations of Computer Vision Wesley. E. Snyder North Carolina State University Hairong Qi University of Tennessee, Knoxville Last Edited February 8, 2017 1 3.2. A BRIEF REVIEW OF LINEAR ALGEBRA Apply

More information

The Bayesian approach to inverse problems

The Bayesian approach to inverse problems The Bayesian approach to inverse problems Youssef Marzouk Department of Aeronautics and Astronautics Center for Computational Engineering Massachusetts Institute of Technology ymarz@mit.edu, http://uqgroup.mit.edu

More information

Fundamentals of Data Assimila1on

Fundamentals of Data Assimila1on 014 GSI Community Tutorial NCAR Foothills Campus, Boulder, CO July 14-16, 014 Fundamentals of Data Assimila1on Milija Zupanski Cooperative Institute for Research in the Atmosphere Colorado State University

More information

Copyright. Marco Antonio Iglesias-Hernandez

Copyright. Marco Antonio Iglesias-Hernandez Copyright by Marco Antonio Iglesias-Hernandez 28 The Dissertation Committee for Marco Antonio Iglesias-Hernandez certifies that this is the approved version of the following dissertation: An Iterative

More information

One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017

One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 1 One Picture and a Thousand Words Using Matrix Approximations Dianne P. O Leary

More information