An efficient sampling method for stochastic inverse problems


Comput Optim Appl (2007) 37:121–138

An efficient sampling method for stochastic inverse problems

Pierre Ngnepieba · M.Y. Hussaini

Received: 23 June 2005 / Revised: 15 November 2005 / Published online: 1 May 2007
© Springer Science+Business Media, LLC 2007

P. Ngnepieba
Department of Mathematics, Florida A&M University, Tallahassee, FL 32307, USA

M.Y. Hussaini
School of Computational Science, Florida State University, Tallahassee, FL, USA
e-mail: susan@cespr.fsu.edu

Abstract  A general framework is developed to treat inverse problems with parameters that are random fields. It involves a sampling method that exploits the sensitivity derivatives of the control variable with respect to the random parameters. As the sensitivity derivatives are computed only at the mean values of the relevant parameters, the related extra cost of the present method is a fraction of the total cost of the Monte Carlo method. The effectiveness of the method is demonstrated on an example problem governed by the Burgers equation with random viscosity. It is specifically shown that this method is two orders of magnitude more efficient than the conventional Monte Carlo method; in other words, for a given number of samples, the present method yields two orders of magnitude higher accuracy than its conventional counterpart.

Keywords  Monte Carlo method · Data assimilation · Error covariance matrix · Sensitivity derivatives · Burgers equation

1 Introduction

Optimization problems under uncertain conditions have become a subject of increasing attention among scientists and engineers in recent times [6, 9, 11]. First, such problems are more realistic and of real-world application value, considering that the physical parameters, the initial and boundary conditions, and the data for validation or assimilation are random in nature and hence sources of uncertainty. Second, despite the extra dimension of complexity due to randomness (which prohibited attempts at solving such problems until recently), these problems have become tractable because of the high efficiency of current numerical algorithms coupled with the high performance of present-day computing machinery.

An obvious general method for solving optimization problems with uncertain parameters is the Monte Carlo method. It requires thousands of samples, and each sample in the present context requires solving partial differential equations. Its convergence is inversely proportional to the square root of the number of samples and directly proportional to the variance of the integrand. Various variance reduction methods have been proposed to accelerate the convergence of such methods. Recently, Cao et al. [1, 2] proposed a sensitivity-derivative-based Monte Carlo method which, coupled with other traditional variance reduction techniques, provides several orders of magnitude improvement in convergence.

The problem under consideration is an inverse problem involving the minimization of a cost function, namely the norm of the difference between the numerical solution and the observation, by controlling the initial condition. Optimal control of initial conditions is crucial for predictive methodologies, for example in meteorology. Since the original work of Le Dimet and Talagrand [7], it has become ubiquitous in the atmospheric and oceanic sciences [4], where it is called the variational data assimilation problem and is solved by an optimal control technique. However, it should be pointed out that the present formulation carries over in a straightforward fashion to any inverse problem governed by partial differential equations (with random parameters) where the control variables are boundary conditions or physical parameters. A semidiscrete version of such problems reduces to the form discussed in the next section.

The proposed method embeds the optimization process inside the outer Monte Carlo process. It then leverages the (sensitivity derivative) information from the inner process to accelerate the convergence of the outer Monte Carlo sampling process. In other words, the proposed method efficiently computes the sensitivity derivative of the control variable (which is the optimal initial condition in the present instance) with respect to the random parameter. It uses the optimal initial condition and its sensitivity derivative to obtain the evolution of the state variable and its sensitivity derivative. Cao et al. [1, 2], on the other hand, assume the availability of the sensitivity derivative of the state variable with respect to the random parameter and exploit it to accelerate the convergence of the Monte Carlo method by employing the residual of the first-order Taylor series expansion of the cost functional in terms of the stochastic parameter, rather than the cost functional itself.

2 Problem statement and solution technique

Consider the evolution equation

  \frac{dX}{dt} = F(X,\xi), \quad t \in (0,T), \qquad X(0) = u,   (2.1)

where $\xi$ is a parameter, $X(t;u,\xi)$ is the state variable that belongs, for any $t$, to a Hilbert space $\mathcal{X}$, $u \in \mathcal{X}$ is the control variable, and $F$ is a nonlinear operator mapping $\mathcal{X}$ into $\mathcal{X}$.

We pose an inverse problem whose goal is to find a control $u$ so that $X$ is close to the observation (data) $X_{obs}$. To this end, we define the cost function

  J(u,\xi) = \frac{1}{2}\int_0^T \|CX(t;u,\xi) - X_{obs}\|^2_{\mathcal{X}_{obs}}\,dt,   (2.2)

where the observation space $\mathcal{X}_{obs}$ is a Hilbert space and $C : \mathcal{X} \to \mathcal{X}_{obs}$ is a bounded linear operator. The cost function quantifies the discrepancy between the simulated value $X(t;u,\xi)$ and the corresponding observation $X_{obs}$, and it depends on $u$ and $\xi$ implicitly through $X$. We then define the following inverse problem: for a given $\xi$, find $u \in \mathcal{X}$ such that

  \frac{dX}{dt} = F(X,\xi), \quad t \in (0,T), \qquad X(0) = u, \qquad J(u,\xi) = \inf_{v} J(v,\xi).   (2.3)

2.1 Problem statement

If the parameter $\xi$ in the above inverse (data assimilation) problem (2.3) is a random field defined on the probability space $(\Omega, \mathcal{F}, P)$, how does one quantify the uncertainty in the optimal initial condition $u^*$ and in $X$? In other words, given the pdf of the random field $\xi$, how does one determine the pdf of $u^*$ and $X$, or their statistics such as their expectations and variances? An obvious approach is to combine an optimization method with a statistical method for uncertainty quantification. Such an approach would nest the optimization process of the control problem (inner loop) in the uncertainty quantification process (outer loop), say a conventional Monte Carlo process. This becomes prohibitively expensive, as the convergence of the Monte Carlo process may require a huge number of samples. However, it is possible to leverage information from the inner process/iteration to accelerate the outer process/iteration, and vice versa. We propose an efficient sampling method that implements this strategy. Its efficiency stems from the exploitation of the so-called sensitivity derivative information, specifically the Jacobian of the optimal initial condition with respect to the random parameter, which is computed during the inner iteration process. Before proceeding to describe the method, we provide here, for the purpose of completeness, a brief description of the optimality system.

2.2 Solution technique

Let $\hat\xi$ denote a sample of the real-valued random field $\xi$ generated according to the pdf $P$. For this value of $\hat\xi$, the optimization problem defined by (2.3) is deterministic. If $J$ has a minimum, then the optimality condition $\nabla_u J(u,\hat\xi) = 0$ holds. The gradient of the cost function $J$ is obtained by using the adjoint model [7, 8]:

  \frac{dX}{dt} = F(X,\hat\xi), \quad t \in (0,T), \qquad X(0) = u,   (2.4)

  \frac{dX^*}{dt} + \left[\frac{\partial F}{\partial X}(X,\hat\xi)\right]^T X^* = C^T (CX - X_{obs}), \quad t \in (0,T), \qquad X^*(T) = 0, \qquad \nabla_u J(u,\hat\xi) = -X^*(0),   (2.5)

with the unknowns $X$, $X^*$, and $u$, where $[\frac{\partial F}{\partial X}(X,\hat\xi)]^T$ is the adjoint of the Fréchet derivative of $F$, and $C^T$ is the adjoint of $C$, defined by $(CX, Y)_{\mathcal{X}_{obs}} = (X, C^T Y)_{\mathcal{X}}$ for all $X \in \mathcal{X}$, $Y \in \mathcal{X}_{obs}$. The optimality condition becomes

  \nabla_u J(u,\hat\xi) = -X^*(0) = 0.   (2.6)

The solution of the minimization problem defined by (2.3) may be found by using Newton's method,

  u^{k+1} = u^k - \left[\nabla^2_u J(u^k,\hat\xi)\right]^{-1} \nabla_u J(u^k,\hat\xi),   (2.7)

where $u^k$ is the current estimate. The minimization procedure employs the limited-memory quasi-Newton minimization algorithm [5] based on the BFGS (Broyden–Fletcher–Goldfarb–Shanno) update formula for the inverse Hessian. The solution $u^*(\hat\xi)$ of the optimality system (2.4)–(2.6) (if it exists) depends implicitly on $\hat\xi$ through $X$.

In a standard Monte Carlo method, samples $\{\xi_j\}_{j=1}^N$ are generated from the pdf $P$. For each $\xi_j$, the solution of the optimality system yields the corresponding optimal initial condition $u^*_j \equiv u^*(\xi_j)$. The sample mean of the $u^*_j$ is a natural estimator of $E[u^*]$:

  E[u^*] = \frac{1}{N}\sum_{j=1}^{N} u^*_j.

It also yields histograms of the $u^*_j$ that are natural estimators of the pdf of $u^*$ and of other statistics of $u^*$. Simultaneously, the evolution equation (2.4) is solved with the initial condition $u^*_j$, which yields the mean and variance of the state variable $X$ in some norm. For satisfactory convergence, a prohibitively large number of samples is required; the drawback is that the optimality system has to be solved a prohibitively large number of times, which is not feasible for realistic problems.

The present method first obtains the mean and covariance (or variance) of the optimal initial condition by efficiently constructing an approximate Jacobian matrix $\nabla_\xi u^*$ around the mean value of the random field $\xi$. Using this mean optimal initial condition and its Jacobian, the evolution equations for $X$ and $X_\xi$ are solved, from which the mean and covariance of $X$ are obtained. The procedure is as follows. Linearizing the random field $\xi$ around its mean value $E[\xi]$ (first-order Taylor expansion), we have

  u^*(\xi) \approx u^*(E[\xi]) + \left[\nabla_\xi u^*(E[\xi])\right](\xi - E[\xi]).   (2.8)

Taking the expectation of both sides of (2.8) using the pdf of $\xi$, we get

  E[u^*] = u^*(E[\xi]).   (2.9)
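To make the two estimators concrete, the following Python sketch (not from the paper) nests a limited-memory quasi-Newton inner solve, here SciPy's L-BFGS-B, inside an outer Monte Carlo loop, and compares the resulting sample mean of $u^*$ with the single-solve estimate (2.9). The scalar forward model $X(t) = u\,e^{\xi t}$, the observation $X_{obs} = 1$, the parameter statistics, and the sample size are illustrative stand-ins for the general problem (2.1)–(2.3).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.1                     # illustrative parameter statistics
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]
X_obs = 1.0

def cost_and_grad(u, xi):
    """Toy instance of (2.2): forward model X(t) = u * exp(xi * t) and a
    simple Riemann-sum discretization of J(u) = 0.5 * int (X - X_obs)^2 dt."""
    X = u[0] * np.exp(xi * t)
    misfit = X - X_obs
    J = 0.5 * dt * np.sum(misfit ** 2)
    dJ = dt * np.sum(misfit * np.exp(xi * t))
    return J, np.array([dJ])

def u_star(xi):
    """Inner optimization by a limited-memory quasi-Newton method."""
    res = minimize(cost_and_grad, x0=np.zeros(1), args=(xi,),
                   jac=True, method="L-BFGS-B")
    return res.x[0]

# Standard (nested) Monte Carlo: one inner optimization per parameter sample.
xi_samples = rng.normal(mu, sigma, 200)
E_mc = np.mean([u_star(xi) for xi in xi_samples])

# Present method, Eq. (2.9): a single optimization at the mean parameter.
E_lin = u_star(mu)
print(E_mc, E_lin)
```

Since the toy cost is quadratic in $u$, each inner solve converges in a few iterations; the point of the comparison is that the linearized estimate (2.9) requires only the single solve at $E[\xi]$.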

An approximate covariance matrix for $u^*$ is obtained using (2.8):

  \mathrm{Cov}(u^*) = E\left[(u^* - E[u^*])(u^* - E[u^*])^T\right] = \left[\nabla_\xi u^*\right] \Sigma \left[\nabla_\xi u^*\right]^T,   (2.10)

where $\Sigma = E[(\xi - E[\xi])(\xi - E[\xi])^T]$ is the error covariance matrix of $\xi$. Once the Jacobian matrix $\nabla_\xi u^*$ is known, the pdf $P_{u^*}$ of the optimal control $u^*$ can be approximated by the Gaussian distribution with the mean $E[u^*]$ defined by (2.9) and the covariance matrix given by (2.10).

To obtain the mean $E[X]$ of the state variable, we expand the state variable $X(t;u^*(\xi),\xi)$ in a first-order Taylor series around the mean value $E[\xi]$,

  X(t;u^*(\xi),\xi) \approx X\big(t;u^*(E[\xi]),E[\xi]\big) + \left[X_\xi\big(t;u^*(E[\xi]),E[\xi]\big)\right](\xi - E[\xi]),   (2.11)

and take the expectation of both sides of the equation, which yields

  E[X] = X\big(t;u^*(E[\xi]),E[\xi]\big),   (2.12)

where $X(t;u^*(E[\xi]),E[\xi])$ is the solution of the evolution equation

  \frac{dX}{dt} = F(X,E[\xi]), \quad t \in (0,T), \qquad X(0) = u^*(E[\xi]).

We obtain $\mathrm{Cov}(X)$ by using (2.11) in the definition of the covariance:

  \mathrm{Cov}(X) = E\Big[\big(X(t;u^*(\xi),\xi) - X(t;u^*(E[\xi]),E[\xi])\big)\big(X(t;u^*(\xi),\xi) - X(t;u^*(E[\xi]),E[\xi])\big)^T\Big] = \left[X_\xi\big(t;u^*(E[\xi]),E[\xi]\big)\right] \Sigma \left[X_\xi\big(t;u^*(E[\xi]),E[\xi]\big)\right]^T,   (2.13)

where $X_\xi(t;u^*(E[\xi]),E[\xi])$ is the solution of the linear equation

  \frac{dX_\xi}{dt} = \left[\frac{\partial F}{\partial X}\big(X(t;u^*(E[\xi]),E[\xi])\big)\right] X_\xi + \left[\frac{\partial F}{\partial \xi}\big(X(t;u^*(E[\xi]),E[\xi])\big)\right], \quad t \in (0,T), \qquad X_\xi(0) = \left[\nabla_\xi u^*(E[\xi])\right].

The next section describes the computation of the Jacobian matrix $\nabla_\xi u^*$.

2.3 Approximation of the Jacobian matrix $\nabla_\xi u^*$

Equation (2.8) can be written as

  P = BQ,   (2.14)

where $B$ is the Jacobian matrix $\nabla_\xi u^*$, $P = \{p^{(j)}\} = \{u^*(\xi_j) - u^*(E[\xi])\}$, $j = 1,2,\ldots,m$, and $Q = \{q^{(j)}\} = \{\xi_j - E[\xi]\}$, $j = 1,2,\ldots,m$. The vectors $p^{(j)} = u^*(\xi_j) - u^*(E[\xi])$, $j = 1,2,\ldots,m$, are obtained by solving the optimization problems (2.3) with $\xi = \xi_j$, $j = 1,2,\ldots,m$; the vectors $q^{(j)} = \xi_j - E[\xi]$, $j = 1,2,\ldots,m$, are simply obtained by sampling $\xi$ using its pdf $P$. The algorithm used to compute the matrix $B$ is explained in [3]; it is briefly described here. To find a $B$ that approximately maps $Q$ into $P$, we seek a least-squares solution of the matrix equation (2.14) by solving the following convex quadratic programming problem:

  \min_{B} \|P - BQ\|^2_F,   (2.15)

where $\|X\|^2_F = \mathrm{trace}(XX^T)$ denotes the Frobenius norm of $X$. The column vectors $b^{(j)}$ of the matrix $B$ are found by solving the linear systems $QQ^T b^{(j)} = (PQ^T)^{(j)}$, where $(PQ^T)^{(j)}$ denotes the $j$th column of the matrix $PQ^T$.
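The least-squares construction of $B$ in (2.14)–(2.15) and the subsequent propagation step (2.10) can be sketched in a few lines of Python; the function names are illustrative, and np.linalg.lstsq is used in place of explicitly forming the normal equations.

```python
import numpy as np

def approximate_jacobian(P, Q):
    """Least-squares solution of min_B ||P - B Q||_F, Eqs. (2.14)-(2.15).

    P : (n, m) columns u*(xi_j) - u*(E[xi]);  Q : (n_xi, m) columns xi_j - E[xi].
    """
    # B Q = P  <=>  Q^T B^T = P^T; solving this in the least-squares sense is
    # equivalent to the normal equations (Q Q^T) B^T = Q P^T.
    BT, *_ = np.linalg.lstsq(Q.T, P.T, rcond=None)
    return BT.T

def propagate_covariance(B, Sigma):
    """First-order covariance propagation, Eq. (2.10): Cov(u*) ~ B Sigma B^T."""
    return B @ Sigma @ B.T

# Tiny synthetic check: for a linear map u*(xi) the Jacobian is recovered exactly.
rng = np.random.default_rng(1)
B_true = rng.normal(size=(4, 3))
Q = rng.normal(size=(3, 10))
P = B_true @ Q
B = approximate_jacobian(P, Q)
print(np.allclose(B, B_true), propagate_covariance(B, 0.01 * np.eye(3)).shape)
```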

3 Applications

The present methodology is applied to two cases. The first case is a linear model problem that has an analytical solution. This simple example demonstrates that the sensitivity of the optimal initial condition to the random parameter is highly nonlinear even though the problem is linear; it also serves to validate our methodology. Second, we consider an inverse problem governed by the Burgers equation.

3.1 Linear model

Consider the following data assimilation problem, where the state variable is the solution of a linear equation:

  \frac{dX}{dt} = \xi X, \quad t \in (0,1), \qquad X(0) = u, \qquad J(u,\xi) = \inf_{v} J(v,\xi),   (3.1)

where $\xi$ is a scalar real-valued random variable with known pdf $P$, and the cost function $J$ is defined by

  J(u,\xi) = \frac{1}{2}\int_0^1 \big(X(t;u,\xi) - X_{obs}\big)^2\,dt.

The optimality system associated with the minimization of the cost function $J$ is

  \frac{dX}{dt} = \xi X, \quad t \in (0,1), \qquad X(0) = u,
  \frac{dX^*}{dt} + \xi X^* = X - X_{obs}, \quad t \in (0,1), \qquad X^*(1) = 0,
  \nabla_u J(u,\xi) = -X^*(0) = 0,   (3.2)

where $X^*$ is the adjoint variable. The explicit solution of (3.2) is

  X(t;u,\xi) = u\,e^{\xi t},   (3.3)
  X^*(t;u,\xi) = \frac{u}{2\xi}\big(e^{\xi t} - e^{-\xi t}e^{2\xi}\big) - \frac{X_{obs}}{\xi}\big(1 - e^{-\xi t}e^{\xi}\big).   (3.4)

The gradient of $J$ with respect to $u$ is given by

  \nabla_u J(u,\xi) = -X^*(0;u,\xi) = \frac{u}{2\xi}\big(e^{2\xi} - 1\big) + \frac{X_{obs}}{\xi}\big(1 - e^{\xi}\big).

Thus, the optimal initial condition $u^*$ (which is the solution of $\nabla_u J(u,\xi) = 0$) is

  u^* = u^*(\xi) = \frac{2X_{obs}}{e^{\xi} + 1} =: f(\xi).   (3.5)

In order to obtain the pdf $P_{u^*}$ of the optimal initial condition $u^*$, we consider its cumulative distribution function (cdf) $F_{u^*}$, defined as (we choose $X_{obs} = 1$ for convenience)

  F_{u^*}(z) = P_{u^*}(u^*(\xi) \le z) = P(f(\xi) \le z) = 1 - P(f(\xi) > z) = 1 - P\big(\xi < f^{-1}(z)\big) = 1 - F_\xi\big(f^{-1}(z)\big),

since $f$ is monotone decreasing on $\mathbb{R}$. This implies

  P_{u^*}(x) = \frac{d}{dx}\Big\{1 - F_\xi\big(f^{-1}(x)\big)\Big\}.

The explicit derivation gives

  P_{u^*}(x) = -P\big(f^{-1}(x)\big)\,(f^{-1})'(x).

Substituting $(f^{-1})'$ and $f^{-1}$, we obtain

  P_{u^*}(x) = \frac{2}{x(2-x)}\,P\big(\log((2-x)/x)\big), \quad x \in (0,2).   (3.6)

Equation (3.6) gives the exact expression for the pdf of $u^*$, which depends on the pdf of the random variable $\xi$. For instance, if $\xi$ is normally distributed with mean $\mu$ and variance $\sigma^2$, then

  P(x) = \frac{e^{-(x-\mu)^2/(2\sigma^2)}}{\sigma\sqrt{2\pi}}, \quad x \in \mathbb{R}.

Using (3.6), we get the pdf of the optimal initial condition as

  P_{u^*}(x) = \begin{cases} \dfrac{2\,e^{-(\log((2-x)/x)-\mu)^2/(2\sigma^2)}}{\sigma\sqrt{2\pi}\,x(2-x)}, & x \in (0,2), \\ 0, & \text{otherwise}. \end{cases}   (3.7)
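As a quick check of (3.7), the density can be evaluated numerically; the sketch below (illustrative, not part of the paper) verifies that it integrates to one for $\mu = 2$ and $\sigma = 0.1$.

```python
import numpy as np

mu, sigma = 2.0, 0.1

def pdf_u_star(x):
    """Exact pdf of u*, Eq. (3.7), supported on (0, 2)."""
    x = np.asarray(x, dtype=float)
    z = np.log((2.0 - x) / x)                 # f^{-1}(x)
    gauss = np.exp(-(z - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))
    return 2.0 * gauss / (x * (2.0 - x))

x = np.linspace(1e-6, 2.0 - 1e-6, 200_001)
p = pdf_u_star(x)
print(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(x)))   # trapezoidal rule, ~1.0
```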

The exact expressions for the mean and the variance of the optimal initial condition are

  E[u^*] = \frac{2}{\sigma\sqrt{2\pi}} \int_0^2 \frac{1}{2-x}\, e^{-\frac{1}{2}\left(\frac{\log((2-x)/x)-\mu}{\sigma}\right)^2} dx,
  V(u^*) = \frac{2}{\sigma\sqrt{2\pi}} \int_0^2 \frac{(x - E[u^*])^2}{x(2-x)}\, e^{-\frac{1}{2}\left(\frac{\log((2-x)/x)-\mu}{\sigma}\right)^2} dx.   (3.8)

Validation of the present method

Choosing $\xi \sim \mathcal{N}(\mu,\sigma^2)$ with $\mu = 2$ and $\sigma = 0.1$, the exact values of the mean and variance are obtained by computing the integrals in (3.8) with trapezoidal quadrature on 1000 points; the exact variance $V_{ex}(u^*)$ is of order $10^{-4}$. The present method yields the mean (using (2.9) and (3.5))

  E_{ap}[u^*] = \frac{2}{e^{E[\xi]} + 1} \approx 0.2384 \quad \text{(independent of the samples)},   (3.9)

and the variance (using (2.10))

  V_{ap}(u^*) = \left[f'(E[\xi])\right]^2 V(\xi) = \frac{4e^{2E[\xi]}}{(e^{E[\xi]} + 1)^4}\, V(\xi).   (3.10)

The absolute error in the mean is $E_{mean} = |E_{ex}[u^*] - E_{ap}[u^*]| = 7.9 \times 10^{-4}$. Using the exact value of $f'(E[\xi])$, we find $V_{ap}(u^*) \approx 4.41 \times 10^{-4}$; the absolute error in the variance, $E_V = |V_{ex}(u^*) - V_{ap}(u^*)|$, is of order $10^{-6}$. Here we use the explicit expressions for the optimal initial condition as a function of the random parameter and for its derivative with respect to $\xi$ at the mean value, i.e.,

  u^*(\xi) = f(\xi) = \frac{2}{e^{\xi} + 1} \quad \text{and} \quad u^*_\xi(E[\xi]) = f'(E[\xi]) = -\frac{2e^{E[\xi]}}{(e^{E[\xi]} + 1)^2}.

Thus, these error estimates are lower bounds on the approximation errors of the present method.
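The validation step can be reproduced numerically. The sketch below is an illustration under the stated choices $\mu = 2$, $\sigma = 0.1$ and $X_{obs} = 1$: it evaluates the integrals (3.8) by the trapezoidal rule and compares them with the linearized values (3.9)–(3.10); the mean error comes out close to the $7.9 \times 10^{-4}$ quoted above and the variance error is of order $10^{-6}$.

```python
import numpy as np

mu, sigma = 2.0, 0.1
x = np.linspace(1e-8, 2.0 - 1e-8, 200_001)

# Exact pdf of u*, Eq. (3.7).
z = np.log((2.0 - x) / x)
p = 2.0 * np.exp(-(z - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi) * x * (2.0 - x))

trapz = lambda f: np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # trapezoidal rule

E_ex = trapz(x * p)                    # exact mean, Eq. (3.8)
V_ex = trapz((x - E_ex) ** 2 * p)      # exact variance, Eq. (3.8)

E_ap = 2.0 / (np.exp(mu) + 1.0)                                        # Eq. (3.9)
V_ap = 4.0 * np.exp(2.0 * mu) / (np.exp(mu) + 1.0) ** 4 * sigma ** 2   # Eq. (3.10)

print(abs(E_ex - E_ap))    # ~ 7.9e-4
print(abs(V_ex - V_ap))    # order 1e-6
```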

Approximation of the derivative $f'(E[\xi])$ using samples

The present method provides the following approximation of the sensitivity of $u^*$ with respect to $\xi$:

  f'_{PM}(E[\xi]) = \left[\sum_{k=1}^{N} p_k q_k\right]\left[\sum_{k=1}^{N} (q_k)^2\right]^{-1},   (3.11)

where $p_k = u^*(\xi_k) - u^*(E[\xi])$ and $q_k = \xi_k - E[\xi]$, $k = 1,2,\ldots,N$. Using (3.11), the variance of $u^*$ is

  V_{PM}(u^*) = \left[f'_{PM}(E[\xi])\right]^2 V(\xi).   (3.12)

Figure 1 plots the error $E_{var} = |V_{ex}(u^*) - V(u^*)|$ as a function of the sample size $N$, where $V(u^*)$ is obtained by the Monte Carlo method and by the present method. For a given sample size, the corresponding absolute value of the error (in both cases) is computed with an ensemble average over 100 realizations of $\xi$. It is apparent that the error of both methods decays at the expected $1/\sqrt{N}$ rate. The present method achieves an order of magnitude reduction in error. However, as $N$ increases beyond about $10^4$, the error of the present method (swamped by the linear tangent approximation error) remains essentially constant.

Fig. 1 Error in variance of optimal initial condition vs. sample size
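The experiment behind Fig. 1 can be sketched as follows: draw $N$ samples, form the slope estimate (3.11) with the closed-form $f$ of (3.5), and return the variance approximation (3.12). The random seed and sample sizes below are illustrative.

```python
import numpy as np

mu, sigma = 2.0, 0.1
f = lambda xi: 2.0 / (np.exp(xi) + 1.0)      # u*(xi) = f(xi), Eq. (3.5) with X_obs = 1
rng = np.random.default_rng(2)

def variance_present_method(N):
    """Slope estimate (3.11) and variance approximation (3.12) from N samples."""
    xi = rng.normal(mu, sigma, N)
    p = f(xi) - f(mu)                        # p_k = u*(xi_k) - u*(E[xi])
    q = xi - mu                              # q_k = xi_k - E[xi]
    slope = np.sum(p * q) / np.sum(q * q)    # f'_PM(E[xi]), Eq. (3.11)
    return slope ** 2 * sigma ** 2           # V_PM(u*), Eq. (3.12)

for N in (10**3, 10**4, 10**5):
    print(N, variance_present_method(N))
```

Averaging $|V_{ex}(u^*) - V_{PM}(u^*)|$ over repeated realizations, as in Fig. 1, exhibits the $1/\sqrt{N}$ decay followed by the plateau caused by the linear-tangent error.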

Probability density function of the optimal initial condition $u^*(\xi)$

The exact pdf of $u^*$ is known from (3.7),

  P_{u^*}(x) = \begin{cases} \dfrac{2\,e^{-(\log((2-x)/x)-\mu)^2/(2\sigma^2)}}{\sigma\sqrt{2\pi}\,x(2-x)}, & x \in (0,2), \\ 0, & \text{otherwise}, \end{cases}

where $\mu = 2$ and $\sigma = 0.1$. The present method yields the mean of the optimal initial condition from (3.9),

  E_{PM}[u^*] = E_{ap}[u^*] = \frac{2}{e^{E[\xi]} + 1} \quad \text{(independent of the samples)},

and values of the variance $V_{PM}$ of order $10^{-4}$ for $N = 10^3$, $10^4$, and $10^5$, respectively. Using this mean and these values of the variance, and assuming a normal distribution, the present method constructs the pdf of the optimal initial condition $u^*$, which is compared with the exact pdf (3.7) in Fig. 2. The pdfs of $u^*$ obtained by the present method at $N = 10^3$, $10^4$, and $10^5$ are indistinguishable, which is not surprising since the differences in the values of the variance are negligibly small. The agreement with the exact pdf is excellent except for a slight discrepancy at the tail.

Fig. 2 Probability density function of $u^*$

Probability density function of the state variable $X(t;u^*(\xi),\xi)$

We have from (3.3)

  X(t;u^*(\xi),\xi) = u^*(\xi)\,e^{t\xi} = \frac{2e^{t\xi}}{e^{\xi} + 1} =: g_t(\xi), \quad t \in (0,1).

Using the present method, we have from (2.12) the mean of $X$,

  E[X] = X\big(t;u^*(E[\xi]),E[\xi]\big) = \frac{2e^{tE[\xi]}}{e^{E[\xi]} + 1},   (3.13)

and from (2.13) the variance of $X$,

  V(X) = \left[g_t'(E[\xi])\right]^2 V(\xi),   (3.14)

where

  g_t'(x) = \frac{2e^{xt}}{(e^x + 1)^2}\big(e^x(t-1) + t\big).

Then, the exact pdf of the state variable is given by

  P^t_X(z) = P\big(g_t^{-1}(z)\big)\,\big(g_t^{-1}\big)'(z) = \frac{P\big(g_t^{-1}(z)\big)}{g_t'\big(g_t^{-1}(z)\big)}.   (3.15)

The inverse $g_t^{-1}(z)$ cannot be obtained analytically. However, it can be computed to any order of accuracy, say, by Newton iteration. Once $P^t_X(z)$ is known, we compute ("numerically exactly") the mean and the variance of the state variable:

  E[X] = \int z\, P^t_X(z)\,dz, \qquad V(X) = \int (z - E[X])^2\, P^t_X(z)\,dz.   (3.16)
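A minimal sketch of this construction, assuming the quantities defined above ($\mu = 2$, $\sigma = 0.1$) and an illustrative evaluation time $t$: invert $g_t$ by Newton iteration started at the mean parameter and evaluate (3.15).

```python
import numpy as np

mu, sigma, t = 2.0, 0.1, 0.5      # t is an illustrative evaluation time

def g(xi):                         # g_t(xi) = X(t; u*(xi), xi)
    return 2.0 * np.exp(t * xi) / (np.exp(xi) + 1.0)

def dg(xi):                        # g_t'(xi)
    return 2.0 * np.exp(t * xi) * (np.exp(xi) * (t - 1.0) + t) / (np.exp(xi) + 1.0) ** 2

def g_inverse(z, xi0=mu, n_iter=50):
    """Newton iteration for g_t(xi) = z, started at the mean parameter."""
    xi = xi0
    for _ in range(n_iter):
        xi -= (g(xi) - z) / dg(xi)
    return xi

def pdf_X(z):
    """Exact pdf of the state variable at time t, Eq. (3.15)."""
    xi = g_inverse(z)
    gauss = np.exp(-(xi - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))
    return gauss / abs(dg(xi))     # |g_t'| since g_t is decreasing near the mean here

print(pdf_X(g(mu)))                # density at the mean-parameter state value
```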

Figure 3 compares the mean of the state variable obtained by the present method (3.13) with the exact mean (3.16) as a function of time. Figure 4 provides a similar comparison for the variance of the state variable as a function of time. We note that the agreement is excellent.

Fig. 3 Expectation of state variable X
Fig. 4 Variance of state variable X

The mean and variance of the state variable are time dependent. At the instants $t = 0.25$, $t = 0.5$, $t = 0.75$, and $t = 0.95$, the present method gives the corresponding values of $E[X]$ and $V(X)$ from (3.13) and (3.14), the variances all being of order $10^{-4}$. These values of the mean and variance of the state variable are used to construct its Gaussian pdf, which is plotted in Fig. 5 along with the exact pdf of the state variable at exactly the same instants. Once again, the agreement is found to be very good.

3.2 Burgers equation

We seek the solution of the stochastic Burgers equation,

  \frac{\partial u}{\partial t} + \frac{1}{2}\frac{\partial u^2}{\partial x} - \frac{\partial}{\partial x}\left(\nu(x)\frac{\partial u}{\partial x}\right) = 0, \quad t, x \in (0,1),
  u(0,x) = 2 - x^2, \qquad u(t,0) = u(t,1) = 0.

The viscosity parameter $\nu$ is assumed to be a Gaussian random field with mean $E[\nu] = x$; its covariance is assumed isotropic with an exponential autocorrelation function (see, e.g., [6, 10]) defined by $C(x,y) = \sigma^2 e^{-|x-y|/L}$, where the correlation length $L = 1$. The observation $X_{obs}(t,x)$ is the solution of the model with viscosity $\nu(x) = x$.
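Realizations of such a viscosity field can be drawn by factoring the covariance matrix assembled from $C(x,y)$ on the spatial grid; the sketch below is illustrative (the grid size, the value of $\sigma^2$, and the added jitter are assumptions, since $\sigma^2$ is not fully recoverable from the text).

```python
import numpy as np

n, L, sigma2 = 100, 1.0, 1.0e-3           # grid size and sigma^2 are illustrative
x = np.linspace(0.0, 1.0, n)
mean_nu = x                                # mean viscosity field E[nu] = x

# Exponential covariance C(x, y) = sigma^2 exp(-|x - y| / L) and its Cholesky factor.
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / L)
Lchol = np.linalg.cholesky(C + 1e-12 * np.eye(n))        # small jitter for safety

rng = np.random.default_rng(3)
nu_samples = mean_nu + (Lchol @ rng.standard_normal((n, 5))).T   # five realizations
print(nu_samples.shape)                    # (5, 100)
```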

Fig. 5 Probability density function of state variable

The objective is to quantify the uncertainty in the optimal initial condition $u^*$ and its evolution. In other words, we would like to obtain statistics, such as the mean and variance, of the optimal initial condition $u^*$ and the state variable $X$.

Numerical results and discussion

The Burgers equation is discretized with a centered finite-difference scheme in space with 100 points and forward Euler in time. The unconstrained minimization algorithm of the limited-memory quasi-Newton type [5], with the convergence criterion set either on the number of iterations or on the gradient norm of the cost function, is used to determine the optimal initial condition. The gradient norm decays to about $10^{-5}$ in 19 iterations, which shows the convergence of the identification process at the mean value of the viscosity. The standard Monte Carlo simulation employed a sample size of 400,000 for the viscosity parameter to compute the mean and covariance of the optimal initial condition and the state variable, for the purpose of validating the results of the present method. It took approximately 4 weeks on a Dell PC (P4, 2.26 GHz, 512 MB).
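A forward solve with the stated discretization (centered differences on 100 spatial points, forward Euler in time) can be sketched as follows; the time step, the viscosity field, and the initial condition used in the example call are illustrative choices, not values from the paper.

```python
import numpy as np

def burgers_forward(u0, nu, dt=2.0e-5, T=1.0):
    """Forward-Euler / centered-difference solve of
    u_t + (u^2/2)_x - (nu(x) u_x)_x = 0 on (0,1), with u(t,0) = u(t,1) = 0."""
    u = u0.copy()
    n = u.size
    dx = 1.0 / (n - 1)
    nu_half = 0.5 * (nu[1:] + nu[:-1])       # viscosity at the cell interfaces
    for _ in range(int(round(T / dt))):
        conv = (u[2:] ** 2 - u[:-2] ** 2) / (4.0 * dx)                # centered (u^2/2)_x
        diff = (nu_half[1:] * (u[2:] - u[1:-1])
                - nu_half[:-1] * (u[1:-1] - u[:-2])) / dx ** 2         # (nu u_x)_x
        u[1:-1] += dt * (diff - conv)
        u[0] = u[-1] = 0.0                                             # Dirichlet boundaries
    return u

x = np.linspace(0.0, 1.0, 100)
print(burgers_forward(np.sin(np.pi * x), 0.05 + x, T=0.2).max())       # illustrative call
```

In the data assimilation setting, a forward solve of this kind is what the inner optimization calls repeatedly, once per cost-function and gradient evaluation.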

In the present approach, the error covariance matrix is approximated by

  \Sigma = \frac{1}{N-1}\sum_{j=1}^{N}(\nu_j - E[\nu])(\nu_j - E[\nu])^T

with $N = 35$ realizations of the random viscosity. Approximation of the statistical first and second moments is carried out using (2.9) for the mean and (2.10) for the covariance matrix, with 2500 samples of the viscosity and the optimal initial condition. It took 1.6 h on a Dell PC (P4, 2.26 GHz, 512 MB).

Figure 6 presents the decay of the gradient norm

  \|\nabla J(u, E[\nu])\|_2 = \Big(\sum_{k=1}^{100}\big(\nabla_k J(u, E[\nu])\big)^2\Big)^{1/2}

of the objective functional versus the number of iterations for the viscosity $\nu = E[\nu]$. It decays very fast, to about $10^{-5}$ in 19 iterations. Figure 7 shows the comparison between the mean of the optimal initial condition $E[u^*] = \frac{1}{n}\sum_{i=1}^{n} u^*(\nu_i)$ obtained by the standard Monte Carlo simulation and the present method's value at the mean viscosity, $u^*(E[\nu])$. We observe that the agreement is excellent. Figure 8 displays the approximate covariance matrix $\mathrm{Cov}(u^*)$ of the optimal initial condition. The diagonal of this covariance matrix represents the variance $V(u^*)$; this variance is compared with the variance obtained by the standard Monte Carlo method in Fig. 9. The agreement is good except towards the end of the spatial domain. The discrepancy is probably due to the inadequacy of the first-order Taylor expansion and the nonlinearity of the problem. Figure 10 presents the $l_2$ norm of the variance of the optimal initial condition, $\|V\|_2 = (\sum_{k=1}^{100} V_k^2)^{1/2}$, where $V_k = V(u^*(x_k,\xi))$.

Fig. 6 Gradient of cost function J (at the mean value of the viscosity $E[\nu] = x$)

Fig. 7 Optimal initial condition for $E[\nu]$ (continuous line) and mean of the optimal initial condition obtained from the Monte Carlo simulation (diamonds)
Fig. 8 Covariance matrix $\mathrm{Cov}(u^*)$

Fig. 9 Variance of $u^*$
Fig. 10 $l_2$ norm of the variance of $u^*$

The present method is found to converge initially at a rate proportional to $1/N^2$, whereas the convergence rate of the standard Monte Carlo method is found to be proportional to $1/N$. Thus, the convergence rate of the present method is two orders of magnitude faster than that of the Monte Carlo method. However, the accuracy of the converged $l_2$ norm of the variance in the present method is affected by the approximation in the construction of the Jacobian of the optimal initial condition with respect to the random parameter. The approximation in the minimization procedure affects both methods equally. The variance of the optimal initial condition $V(u^*)$ determined by the Monte Carlo simulation with a million samples is taken as the reference for computing the $l_2$ norm of the error $E_{var} = V_{ref}(u^*) - V(u^*)$:

  \|E_{var}\|_2 = \Big(\sum_{k=1}^{100} \big|V_k - V_k^{ref}\big|^2\Big)^{1/2}.

Figure 11 displays the $l_2$ norm of the error $E_V$ as a function of the sample size $N$ for the Monte Carlo simulation and the present method. It is to be noted that the convergence rate of the Monte Carlo method is proportional to the expected $1/\sqrt{N}$, while the present method's convergence rate is proportional to $1/N^2$ until it stagnates due to the error inherent in the linear tangent approximation.

Fig. 11 Comparison of errors ($l_2$ norm)
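The convergence rates quoted here are the slopes of the error curves on a log–log plot; a minimal sketch of such a fit, using synthetic error data rather than the paper's, is given below.

```python
import numpy as np

def fitted_rate(N, err):
    """Least-squares slope of log(err) versus log(N); -0.5 corresponds to 1/sqrt(N)."""
    slope, _ = np.polyfit(np.log(N), np.log(err), 1)
    return slope

N = np.array([1e2, 1e3, 1e4, 1e5])
print(fitted_rate(N, 0.3 / np.sqrt(N)))   # synthetic Monte-Carlo-like errors, ~ -0.5
print(fitted_rate(N, 5.0 / N ** 2))       # synthetic present-method-like errors, ~ -2
```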

4 Conclusions

An efficient sampling method is developed to solve stochastic inverse problems with a random parameter. It involves the sensitivity derivatives of the control variable with respect to the random parameter. Specifically, the present method quantifies the uncertainty (due to the random parameter) of the optimal initial condition of a data assimilation problem and its impact on the evolution of the state variable. The paper discusses a linear example that permits analytical construction of the pdf of the optimal initial condition given the pdf of the random parameter. It provides insight into the nonlinear sensitivity of the optimal initial condition to the random parameter even in the linear case, and it further validates the present method. The effectiveness of the present method is then demonstrated on the nonlinear example of an inverse problem governed by the Burgers equation.

Acknowledgement  This work was partially supported by NASA Ames Research Center (contract NCC2-1349). It is a pleasure to acknowledge the comments of Professor William Hager.

References

1. Cao, Y., Hussaini, M.Y., Zang, T.A.: An efficient Monte Carlo method for optimal control problem with uncertainty. Comput. Optim. Appl. 26 (2003)
2. Cao, Y., Hussaini, M.Y., Zang, T.A.: Exploitation of sensitivity derivatives for improving sampling methods. AIAA J. 42(4) (2004)
3. Fletcher, R., Grothey, A., Leyffer, S.: Computing sparse Hessian and Jacobian approximations with optimal hereditary properties. In: Biegler, L.T., Coleman, T.F., Conn, A.R., Santosa, F.N. (eds.) Large-Scale Optimization with Applications, Part II: Optimal Design and Control. Springer, Berlin (1997)
4. Ghil, M., Ide, K.: Introduction. In: Ghil, M., et al. (eds.) Data Assimilation in Meteorology and Oceanography: Theory and Practice, pp. i–iii. Meteorological Society of Japan and Universal Academy Press, Tokyo (1997)
5. Gilbert, J.C., Lemaréchal, C.: Some numerical experiments with variable-storage quasi-Newton algorithms. Math. Program. 45 (1989)
6. Huyse, L., Walters, R.W.: Random field solutions including boundary condition uncertainty for the steady-state generalized Burgers equation. NASA/CR, ICASE Report (2001)
7. Le Dimet, F.-X., Talagrand, O.: Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects. Tellus 38A (1986)
8. Lions, J.L.: Optimal Control of Systems Governed by Partial Differential Equations. Springer, Berlin (1971) (translated by Mitter, S.K.)
9. Putko, M.M., Newman, P.A., Taylor III, A.C., Green, L.L.: Approach for uncertainty propagation and robust design in CFD using sensitivity derivatives. AIAA Paper (2001)
10. Tarantola, A.: Inverse Problem Theory: Methods for Data Fitting and Model Parameter Estimation. Elsevier, Amsterdam (1987)
11. Walters, R.W., Huyse, L.: Uncertainty analysis for fluid mechanics with applications. NASA/CR, ICASE Report (2002)
