2010 AIAA SDM Student Symposium: Constructing Response Surfaces Using Imperfect Function Evaluations


51st AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 18th April 2010, Orlando, Florida

John Axerio, Stanford University, Stanford, CA 94305, USA
Qiqi Wang, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Gianluca Iaccarino, Stanford University, Stanford, CA 94305, USA

(Ph.D. Candidate, Mechanical Engineering, 488 Escondido Mall, Stanford, CA, AIAA Student Member. Assistant Professor, Aeronautics and Astronautics, 77 Massachusetts Avenue, Cambridge, MA. Assistant Professor, Mechanical Engineering, 488 Escondido Mall, Stanford, CA. Copyright 2010 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.)

Five methods to construct response surfaces from a finite number of function evaluations are compared for several analytical test functions. A recently developed stochastic approximation method is compared to Kriging and to polynomial interpolation approaches. It is shown that if there is uncertainty in a subset of the function evaluations, it is more advantageous to include the aberrant data (if error bounds are provided) in the construction of the response surface than to disregard the uncertain realizations entirely.

I. Introduction

Uncertainty quantification has become increasingly prevalent in recent years due to the difficulty of providing confidence when simulating real world systems. The computational expense of running a full factorial design with high dimensionality and a large parameter space is prohibitive in many engineering applications, especially those governed by coupled partial differential equations (PDEs). An example is the analysis of fluid flow problems that require the solution of a system of nonlinear, time-dependent PDEs called the Navier-Stokes equations. The time and effort involved in producing an accurate simulation that resolves the turbulence, boundary layers, and shock-wave interactions of high speed flows remains prohibitive even on large computer clusters.[1]

Engineers and scientists often use Response Surface Methodology (RSM) in situations where several input variables potentially influence some performance measure (response) of a process. The field of RSM consists of the experimental strategy for exploring the space of the process or independent variables, empirical statistical modeling to develop an appropriate approximating relationship between the yield and the process variables, and optimization methods for finding the values of the process variables that produce desirable values of the response. Finding an appropriate approximating relationship typically involves interpolation or regression to fill the remaining parameter space so that the solution statistics can be computed. The difference between interpolation and regression is that with interpolation the response surface must pass exactly through all the data points, while with regression this is often not the case, because the response surface is fitted to the data by minimizing the distance between the model and the data points.

Experiments in fluid mechanics are time consuming, and using RSM to optimize a response variable based on inlet or geometrical parameters is far too costly. As a result, the designer is forced to a limited number of experimental or simulation runs, and the question becomes which cases to select and how to build the response surface to gain as much insight into the problem as possible. Analytical solutions to determine the mean and variance (or other solution statistics) do not exist for nonlinear high dimensional problems. These metrics can be computed numerically directly from data (e.g., Monte Carlo) or by building an approximate response surface and then sampling it. As long as the number of collocation points is not extremely low, more accurate statistical metrics are estimated from the response surface approximations than by estimating the metrics directly from the data.[2]

If high fidelity computer simulations are used to calculate the response for all runs, an interpolation-type method is more appropriate, whereby the response surface passes through all realizations exactly.[3] Conversely, if experimental realizations or imperfect solutions (not well converged, bad quality grids, ill-conditioned, etc.) are used to construct the response surface, a regression-type method should be used to account for the uncertainty present.[4] In practical cases, it is possible that limitations in the experimental settings do not allow full coverage of the entire parameter space of interest. It is desirable to use simulations to cover these black zones. What is the best way to seamlessly combine interpolation and regression methods to ensure continuity in response surfaces? The use of proper orthogonal decomposition (POD) or Kriging interpolation to fill in gappy spatio-temporal data for unsteady fluid flow applications has been explored.[5] It was shown that in cases of insufficient temporal resolution, large spatial gappiness, or flow fields with black zones, Kriging outperformed POD-based methods; with sufficient temporal resolution the converse was true. Other approximation methods that fill in black zones have also appeared in the literature. One particular method used subset designs (as opposed to full factorial designs) as well as the minimax loss criterion to increase the robustness of response surface designs to missing observations.[6]

It is common for fluid flow simulations to exhibit unsteadiness due to a variety of factors. Oscillations can be attributed to the physical nature of the problem (e.g., vortex shedding, turbulence) or to numerical instabilities. If a nominal steady state solution is sought but insufficient iterations are carried out in a time marching scheme, the solution will keep changing, and may eventually fluctuate around a nominal value. The variability of the quantity of interest (response variable) is critical when constructing a response surface. Convergence in CFD simulations is typically measured by the residuals (a measure of the extent of the imbalances arising from solving the discretized PDEs) and by monitoring integrated forces, moments, or mass flows. If one or more realizations are not converged, causing the response variable to be wrong or uncertain, sharp discontinuities can appear in the response surface. Simulations with similar input parameters can also be phenomenologically different from one another; for instance, if the flow is supersonic and governed by a strong shock, there could be a range of inlet parameters that do not produce shocks at all. In this case, the response surface of a certain flow characteristic would not be smooth, and it would not be surprising to encounter discontinuities.[7] Inaccurate results in computer simulations can also be attributed to sources of error such as discretization error, round-off error, iteration or convergence error, physical-modeling error, and human error.[8] As a result of all of these errors and uncertainties, the simulations are not truly deterministic, and a regression-type method should be used to build the response surface.

There are three types of scenarios that characterize the response in the parameter space. The white zone is where all the data is fully trusted.
This zone comprises precise, grid-converged function evaluations with no uncertainty. The complement of the white zone is the black zone, where there is no information about the response variable. This could be due to lost data or simply because the data is impossible to compute. The remainder of the parameter space is called the gray zone. Any data inside this zone is available but not completely trusted. Sometimes a particular CFD problem with specific input parameters does not converge. If this happens, the designer is forced to perturb the input parameters slightly, which may or may not cause the simulation to converge. Instead of running a new simulation, which does not guarantee success, it is proposed to use the imprecise value of the response variable along with its corresponding variability (which can be extracted by monitoring the response variable at every iteration), instead of completely disregarding the realization.

This study aims at answering the following question: if there is a certain range of input parameters that produces uncertain results, what is the most accurate response surface construction method? Several options are summarized below and fully detailed in the results section of this paper:

- Using just the realizations close to the uncertain region (nearest neighbors) to build the response surface, completely disregarding any function evaluations that lie in the uncertain (gray) area

- Passing the response surface exactly through all realizations, whether they are uncertain or not, using an interpolation-type method

- Ignoring all uncertain realizations, and using either a regression-type method with a least squares algorithm to create a surface that best fits the data, or an interpolation-type method to construct a response surface that passes exactly through all certain realizations

- Using regression in the gray zones, incorporating the variability of the uncertain realizations in the least squares regression algorithm, combined with an interpolation-type method in all certain areas of the domain

II. Motivation

Uncertainty quantification methods are commonly used in industrial applications to determine variability and reveal nonlinear interactions in quantities of interest. The motivation behind this study is the geometrical uncertainty associated with a rotating Formula 1 tire. The objective is to determine whether the shape of the contact patch and the bulge of the tire profile influence the mass capture at the engine intake. Is there an efficient way to quantify the variability of the mass capture due to geometrical uncertainties? Monte Carlo methods are immediately out of the question due to the cost of running many simulations, but one popular alternative is stochastic collocation.[9] Two geometric parameters were chosen to define the shape of the tire profile and contact patch, and their range of variability was set by geometric constraints (e.g., the camber angle of the wheel could not exceed ±5° based on the suspension arm range of motion). These parameters were represented by uniform random variables, whereby any value was assumed to be equally probable over the interval. Stochastic collocation based on a tensor grid polynomial construction with 9 points in each of the two dimensions (d = 2) was performed. The abscissas were chosen according to the Clenshaw-Curtis point distribution, which includes evaluations at the boundaries of the domain. The Clenshaw-Curtis distribution was also chosen for its nestedness: with 9 points in each dimension it was possible to compute the solution statistics corresponding to 3 and 5 points as well, allowing the error to be estimated. The total number of computations (or realizations) to perform was 9^d = m = 81. Each realization had approximately 2 million cells in order to ensure a grid-independent solution. Stochastic collocation was performed to compute the mean and variance of the mass flow rate through the radiator using the corresponding Clenshaw-Curtis weights, as sketched in the example below. In addition, a probability density function (PDF) of the mass flow rate was generated by uniformly sampling the spline interpolant surface that passed exactly through all the input points.

In the case of the Formula 1 tire, a smooth response surface is expected if all realizations converge. This was not the case in this study. Some realizations did not converge, while others had residual levels well above average. As a result, when stochastic collocation was performed, the mean and variance did not converge when comparing the results with 3, 5, and 9 points in each dimension. Also, when the PDF was generated for the mass flow rate, the shape of the PDF was different at all three error estimate levels. Upon further investigation it was observed that some simulations calculated the mass flow rate to be more than two standard deviations away from the mean, and other simulations were completely wrong. The resulting response surface passed through the outliers, producing a skewed PDF. The question then remained how to proceed with stochastic collocation and response surface construction when there is uncertainty in the function evaluations.
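For concreteness, the following sketch assembles the collocation mean just described: Clenshaw-Curtis nodes and weights on a 9x9 tensor grid over [-1,1]^2, with the uniform density folded in by dividing by the domain area. The response handle f is a hypothetical stand-in for the CFD mass flow rate, and the clencurt helper follows a standard construction of the Clenshaw-Curtis rule; this illustrates the machinery, not the actual Formula 1 computation.

% Sketch: mean of a response over [-1,1]^2 under uniform inputs using
% Clenshaw-Curtis quadrature on a 9x9 tensor grid (f is a placeholder).
f = @(x,y) cos(x.^2) + cos(y);            % hypothetical response
n = 8;                                    % n+1 = 9 nodes per dimension
[x,w] = clencurt(n);                      % nodes in [-1,1]; sum(w) = 2
W = w(:)*w(:).';                          % tensor-product weights
[X,Y] = meshgrid(x,x);
mu = sum(sum(W.*f(X,Y)))/4;               % divide by domain area = 4

function [x,w] = clencurt(n)              % standard Clenshaw-Curtis rule
theta = pi*(0:n)'/n; x = cos(theta);
w = zeros(1,n+1); ii = 2:n; v = ones(n-1,1);
if mod(n,2) == 0
    w(1) = 1/(n^2-1); w(n+1) = w(1);
    for k = 1:n/2-1, v = v - 2*cos(2*k*theta(ii))/(4*k^2-1); end
    v = v - cos(n*theta(ii))/(n^2-1);
else
    w(1) = 1/n^2; w(n+1) = w(1);
    for k = 1:(n-1)/2, v = v - 2*cos(2*k*theta(ii))/(4*k^2-1); end
end
w(ii) = 2*v/n;
end

Because the Clenshaw-Curtis family is nested, rerunning the same loop with n = 2 and n = 4 reuses existing realizations, which is how the 3- and 5-point error estimates mentioned above are obtained.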
III. Methodology

Three important questions need to be addressed prior to analyzing the results of multidimensional response surface construction with uncertainty. The first question is whether the distribution of the collocation points (abscissas) has a strong impact on the convergence rate of the statistics. In order to answer this question, four different point distributions are compared: Newton-Cotes, Gauss, Clenshaw-Curtis, and Lobatto. The second question is how to handle the points that are uncertain. Should these points be ignored? If credible ranges for the response variable can be determined, should these points still influence the shape of the response surface? Finally, which approximation technique gives the fastest rate of convergence? In order to answer the last question, five different approximation techniques are described and compared: polynomial, spline, Kriging, Padé, and a new stochastic interpolation scheme (MIR).

The following is a general statement of the problem:

1. Given $m$ function evaluations $y(x_i)$, $i = 1, \dots, m$, where $x_i \in D \subset \mathbb{R}^d$ and $y(x) \in \mathbb{R}^q$; the $x_i$ are the abscissas and the $y(x_i)$ are the responses (quantities of interest)

2. Assume $m_g$ gray evaluations are imperfect: $y(x_j) \in [y_l, y_u]$, $j = 1, \dots, m_g$, where $x_j \in D_g \subset D \subset \mathbb{R}^d$

3. Build the response surface $\tilde{y}(x)$ by approximating the function response $y(x)$, and compute the statistics, such as the mean, variance, probability of failure, etc.

A. Test Function Description

Three test surfaces (cosine, Runge, Rosenbrock) are chosen, as shown in Fig. 1 along with their analytical expressions. By selecting response surfaces that can be represented analytically, statistical metrics such as the mean and variance over the intervals of interest can be compared to their respective analytical values. Ideally, as the number of abscissas is increased, the root mean square error (RMSE) should approach zero.

Figure 1. The three analytic test functions used in this study: (a) cosine surface, $\cos^2 x^2 + \cos^2 y$; (b) Runge surface, $1/\left((1 + 50x^2) + (1 + 50y^2)\right)$; (c) Rosenbrock surface, $(1 - x)^2 + (y - x^2)^2$.

In this study, the accuracy of the response surface relative to the analytical test functions is measured using the three statistical metrics shown in Eq. (1): the RMS error, the mean error, and the variance error.

$$\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(\tilde{y}_i - y_i\right)^2} \qquad (1a)$$

$$|\tilde{\mu} - \mu| = \frac{1}{m}\left|\sum_{i=1}^{m}\left(\tilde{y}_i - y_i\right)\right| \qquad (1b)$$

$$|\tilde{\sigma}^2 - \sigma^2| = \frac{1}{m}\left|\sum_{i=1}^{m}\left[\left(\tilde{y}_i - \tilde{\mu}\right)^2 - \left(y_i - \mu\right)^2\right]\right| \qquad (1c)$$

Uncertainty is injected into the three analytic test functions by altering the function evaluations that lie inside the gray zone. Two different types of gray zones are used. The first type is in the shape of a circle centered at the origin for all three test functions. The second type is in the shape of a quarter circle offset from the origin, such that the center of the quarter circle is located at the intersection of the minimum values of the two independent variables (see Fig. 2). These two zones correspond to an interpolatory and an extrapolatory error respectively. The circle and quarter circle are sized as fixed fractions of the full 2D domain. The domain extents are [-1,1] in each dimension for the cosine and Runge surfaces, and [-2,2] in each dimension for the Rosenbrock surface. Any abscissas inside the gray zone are perturbed by an amount proportional to their distance from the epicenter of the uncertain area. The magnitude of the maximum perturbation (which occurs at the origin for the centered gray zone and at [min(x), min(y)] for the offset gray zone) is equivalent to four times the standard deviation of the analytic function inside the gray zone.
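As a compact reading of the metrics in Eq. (1), the following sketch evaluates all three, assuming ytilde and y are vectors holding the approximated and analytic surface values at the m test points (the variable names are illustrative, not from the paper):

% Sketch of the three statistical metrics of Eq. (1).
err  = ytilde - y;
rmse = sqrt(mean(err.^2));                       % Eq. (1a)
dmu  = abs(mean(err));                           % Eq. (1b)
mut  = mean(ytilde); mu = mean(y);
dvar = abs(mean((ytilde-mut).^2 - (y-mu).^2));   % Eq. (1c)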

Figure 2. Contour plots showing the three different sized centered and offset gray zones for the (a) cosine, (b) Runge, and (c) Rosenbrock surfaces, along with the 23 abscissas in each dimension (black dots).

The analytic expression for the error bound as a function of the radial distance $r$ from the epicenter is the same for all the test surfaces and is shown in Eq. (2):

$$E(r) = \begin{cases} 4\sigma \dfrac{(R - r)^3}{R^3} & \text{if } r < R \\ 0 & \text{if } r \ge R \end{cases} \qquad (2)$$

A 1-dimensional example of the perturbed test function (dotted red line) is shown in Fig. 3; the solid red line denotes the analytical function with no error. The gray zone is the interval [a,b], with the epicenter of uncertainty located at (a+b)/2. The perturbed test function is identical to the unperturbed test function outside the [a,b] interval.

Figure 3. 1-dimensional example showing the undisturbed test function (solid red line) and the perturbed test function (dotted red line).

B. Distribution of Abscissas

Four different collocation point (quadrature point) schemes are compared to determine which distribution of abscissas produces the most accurate representation of the true analytical function for a given approximation scheme. For polynomial functions the well-developed approximation theory provides clear guidelines, but in a general situation (e.g., discontinuous response surfaces) it is not clear which distribution is the most advantageous. The first distribution considered is the Newton-Cotes points, which are equally spaced and lead to the well known Runge phenomenon, thus providing a baseline to improve upon. The second point distribution considered is the Gauss distribution; the locations of the nodes are determined by solving a tri-diagonal eigenvalue problem.[10] Clenshaw-Curtis points, or Chebyshev points, are also considered; they are derived by splitting a semicircle defined on the interval [a,b] into m-1 arcs of equal length (where m is the number of nodes). Projecting the endpoints of the arcs onto the x-axis produces the Chebyshev points. It has been shown in the past that Clenshaw-Curtis performance is quite similar to that of Gauss quadrature for non-polynomial functions.[11] The last point distribution considered is the Legendre-Gauss-Lobatto (LGL) points, derived by computing the roots of $(1 - x^2)P_n'(x) = 0$, where $P_n'$ denotes the derivative of the Legendre polynomial of degree n.[12] The Matlab script below computes the one-dimensional point distributions for the four schemes just described.

m = 9;                                   % number of collocation points
x1 = linspace(-1,1,m)';                  % Newton-Cotes point distribution
beta = 0.5./sqrt(1-(2*(1:m-1)).^(-2));   % Jacobi matrix entries (Legendre)
[V,D] = eig(diag(beta,1)+diag(beta,-1));
x2 = diag(D);                            % Gauss point distribution
x3 = sort(cos(pi*(0:m-1)'/(m-1)));       % Clenshaw-Curtis point distribution
P = zeros(m,m);                          % Newton iteration for LGL nodes,
x_old = 2;                               % starting from the Chebyshev points
x = x3;
while max(abs(x-x_old)) > eps
    x_old = x;
    P(:,1) = 1;
    P(:,2) = x;
    for k = 2:(m-1)                      % Legendre polynomial recurrence
        P(:,k+1) = ((2*k-1)*x.*P(:,k)-(k-1)*P(:,k-1))/k;
    end
    x = x_old-(x.*P(:,m)-P(:,m-1))./(m*P(:,m));
end
x4 = sort(x);                            % Legendre-Gauss-Lobatto distribution

Figure 4. Graphical comparison of the four collocation point distributions (Newton-Cotes, Gauss, Clenshaw-Curtis, and Legendre-Gauss-Lobatto) on [-1,1] for m = 9.

Fig. 4 compares the four point distributions for nine collocation points in one dimension on the interval [-1,1]. It is clear from the figure that only the Gauss points do not place nodes at the boundaries of the domain, and that all schemes are symmetric about the origin.

C. Response Surface Construction Methodology

Four different test cases are considered. The first, baseline test case (Perfect) uses the unperturbed analytical test function evaluations to build a response surface. This is achieved by using the approximation schemes described in part D, which exactly interpolate all function evaluations. In theory, as the number of collocation points is increased for this case, the RMSE should approach zero. The second test case (Erroneous) builds a response surface that exactly interpolates the perturbed function evaluations in the gray zone and the unperturbed function evaluations outside the gray zone. This is a worst case scenario, in which the errors or impreciseness of the numerical experiment are not acknowledged. In theory, as the number of collocation points is increased for this case, the RMSE convergence should stall. Next, the Partial case completely disregards the imprecise evaluations: any collocation points that are inside the gray zone are treated as inside the black zone and removed; the resulting response surface exactly interpolates the remaining points, which all lie inside the white zone. The fourth test case considered (Imperfect) uses the newly developed stochastic approximation MIR (multivariate interpolation regression), described in part D, to correct any function evaluations that lie inside the gray zone. Once all uncertain points have been corrected, a response surface is constructed that exactly interpolates the corrected points. The correction algorithm takes into account the uncertainty associated with all function evaluations, and uses a least squares type regression to construct a surface that will not necessarily pass through the function evaluations inside the gray zone, but is strictly interpolatory inside the white zone.
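To make the gray zone bookkeeping concrete, the following sketch builds the 2D tensor grid from the 1D LGL nodes x4 computed above and counts the abscissas m_g that fall inside a centered circular gray zone; the radius R is a hypothetical stand-in for the fractional sizes used in this study.

% Sketch: tensor-product abscissas from the 1D LGL nodes x4, and the
% count m_g of points falling inside a centered circular gray zone.
[X,Y] = meshgrid(x4,x4);          % m^2 two-dimensional abscissas
R = 0.5;                          % hypothetical gray-zone radius
inGray = X.^2 + Y.^2 < R^2;       % centered circular gray zone
m_g = nnz(inGray);                % number of imperfect evaluations

For the offset quarter-circle zone, the test simply becomes (X-min(x4)).^2 + (Y-min(x4)).^2 < R^2.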

To test the convergence behavior of the mean, variance, and RMSE, the number of collocation points was varied between 3 and 23 in each dimension for the cases previously mentioned. A response surface was built using the collocation point function evaluations, and the surface was sampled at 101 equally spaced points in each dimension. These 101 points are used to compute the RMSE, mean, and variance. Since the size of the uncertain domain remains fixed when the number of collocation points is varied, the number of points inside the gray zone changes.

Figure 5. Number of abscissas that lie inside the centered and offset gray zones (m_g) using an LGL tensor grid in two dimensions, for gray zones sized as fractions of the full domain.

Fig. 5 shows the number of uncertain points (m_g) for the various sizes of offset and centered gray zones. It is apparent that the number of perturbed points increases rapidly as the grid size is increased.

D. Details of Approximation Methodology

The details of the five approximation schemes, hereby referred to as spline, polynomial, Kriging, Padé, and MIR, are presented. All schemes, except for the Padé interpolation, are able to handle sparse data (i.e., the data does not have to lie on a full tensor grid). In addition, only MIR can perform regression that incorporates the uncertainty associated with points inside the gray zone.

1. Polynomial Interpolation

The first interpolation scheme considered uses a tensor-product type polynomial to pass through all function evaluations exactly. In two dimensions the interpolant has the form

$$p(x,y) = a_1 + a_2 x + a_3 y + a_4 xy + a_5 x^2 + a_6 y^2 + a_7 x^2 y + a_8 x y^2 + a_9 x^2 y^2 + \dots + a_{(n+1)^2-2}\, x^n y^{n-1} + a_{(n+1)^2-1}\, x^{n-1} y^n + a_{(n+1)^2}\, x^n y^n \qquad (3)$$

where $n$ is the highest polynomial order represented. The number of collocation points in two dimensions, $m$, is equal to $(n+1)^2$. The statement that $p$ interpolates the data points exactly means that

$$p(x_i, y_i) = f_i \quad \text{for all } i \in \left\{1, 2, \dots, (n+1)^2\right\}. \qquad (4)$$

In order to solve for the $(n+1)^2$ unknown coefficients $a_i$, the Vandermonde-type linear system shown in Eq. (5) needs to be solved:

$$\begin{bmatrix} 1 & x_1 & y_1 & x_1 y_1 & \cdots & x_1^{n-1} y_1^n & x_1^n y_1^n \\ 1 & x_2 & y_2 & x_2 y_2 & \cdots & x_2^{n-1} y_2^n & x_2^n y_2^n \\ \vdots & & & & & & \vdots \\ 1 & x_{(n+1)^2} & y_{(n+1)^2} & x_{(n+1)^2}\, y_{(n+1)^2} & \cdots & x_{(n+1)^2}^{n-1} y_{(n+1)^2}^n & x_{(n+1)^2}^n y_{(n+1)^2}^n \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_{(n+1)^2} \end{bmatrix} = \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_{(n+1)^2} \end{bmatrix} \qquad (5)$$

The left-hand-side matrix in Eq. (5) is $(n+1)^2$ by $(n+1)^2$. In higher dimensions this Vandermonde-type matrix becomes very large, and the number of unknowns becomes $(n+1)^d$, where $d$ is the number of dimensions in the problem. The condition number of the Vandermonde matrix may become high, therefore care must be taken when solving this system of linear equations.[13] By the unisolvence theorem, there is a one-to-one map between point vectors and polynomials, producing only one unique polynomial of degree $n$ that interpolates $n+1$ distinct collocation points exactly. If one point is removed from the $(n+1) \times (n+1)$ tensor grid for a two-dimensional problem, there is one extra degree of freedom in the polynomial representation; any of the $(n+1)^2$ terms in $p(x,y)$ can be removed. In this study, the highest order terms are removed first. For example, if four collocation points lie in the gray zone for the Partial case, these points need to be completely removed from the interpolation. As a result, the highest order term $x^n y^n$ is dropped first from $p(x,y)$, followed by $x^{n-1} y^n$, $x^n y^{n-1}$, and finally $x^{n-1} y^{n-1}$.
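A minimal sketch of Eqs. (3)-(5) follows, assuming x, y, and f are column vectors of length (n+1)^2 holding the tensor grid coordinates and responses (the names and the monomial ordering are illustrative choices; the conditioning caveat above applies in practice):

% Sketch: solve the Vandermonde-type system of Eq. (5) for the
% tensor-product polynomial interpolant p(x,y) of Eq. (3).
n = sqrt(numel(f)) - 1;              % highest order in each variable
V = zeros(numel(f), (n+1)^2);
col = 0;
for j = 0:n                          % one column per monomial x^i*y^j
    for i = 0:n
        col = col + 1;
        V(:,col) = x.^i .* y.^j;
    end
end
a = V \ f;                           % interpolation coefficients
% evaluate the interpolant at a single query point (xq, yq):
phi = reshape((xq.^(0:n)).' * (yq.^(0:n)), [], 1);
pq  = phi.' * a;                     % p(xq, yq)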

2. Cubic Interpolation

In order to interpolate scattered data using cubic splines, a method based on Delaunay triangulation is employed.[14] The built-in spline functions in Matlab are not able to handle 2-dimensional sparse datasets. As a result, the griddata method, which uses the Quickhull algorithm for computing the convex hull, is used for the triangle-based cubic interpolation.[15]

3. Kriging Interpolation

The Kriging algorithm adopted in this study is based on the DACE Matlab toolbox.[16] Kriging was originally introduced as a geo-statistics interpolation technique.[21] Upon its inception, it was applied to the design and analysis of computer experiments (DACE).[22] It is assumed there are $m$ abscissas (design sites in Kriging terminology[16]) $[x_1, \dots, x_m]$ with $x_i \in \mathbb{R}^d$ (where $d$ is the dimensionality of the problem). In this study $m$ is of size $N^2 - N_g$, where $N$ is the number of collocation points per dimension. For every design site there is a corresponding set of responses $[y_1, \dots, y_m]$ with $y_i \in \mathbb{R}^q$ (where $q$ is the number of response variables; in this study $q = 1$). Using the DACE formulation, the response function $\hat{y}(x)$ expresses the deterministic response $\hat{y}(x) \in \mathbb{R}^q$ for a $d$-dimensional input $x \in D \subset \mathbb{R}^d$ as a realization of a regression model plus a stochastic process. The regression model is a linear combination of $p$ functions, where $p$ depends on the type of regression model chosen (see Eq. (7)). The stochastic process is defined to have zero mean and variance $\sigma_f^2 R(\theta, w, x)$, where $\sigma_f^2$ is the process variance for the response and

$$R(\theta, w, x) = \prod_{j=1}^{d} \exp\left(-\theta_j (w_j - x_j)^2\right) \qquad (6)$$

is the correlation model used, based on a Gaussian kernel. This model was chosen because the response surface is assumed to be continuously differentiable.[16] A quadratic regression model is chosen for this study, which forces $p$ to be defined by Eq. (7):

$$p = \frac{(d+1)(d+2)}{2} \qquad (7)$$

Since $d = 2$, this automatically constrains the regression model to be a linear combination of $p = 6$ functions. In general, the quadratic regression model is defined as

$$f(x) = [f_1(x), f_2(x), \dots, f_p(x)] \qquad (8)$$

where $f_1(x) = 1$; $f_2(x) = x_1, \dots, f_{d+1}(x) = x_d$; $f_{d+2}(x) = x_1^2, \dots, f_{2d+1}(x) = x_1 x_d$; $f_{2d+2}(x) = x_2^2, \dots, f_{3d}(x) = x_2 x_d$; $\dots$; $f_p(x) = x_d^2$. Using this notation,

$$F = [f(x_1), \dots, f(x_m)]^T \qquad (9)$$

is an $m \times p$ regression function matrix at the design sites. Defining $R$ as the $m \times m$ matrix of stochastic-process correlations of the response variable at the design sites, an element of $R$ is

$$R_{ij} = R(\theta, x_i, x_j), \quad i, j = 1, \dots, m. \qquad (10)$$

Using this definition and a quadratic regression model, the Kriging interpolation at each untried site $x$ is

$$\hat{y}(x) = f(x)\,\beta^* + r(x)^T \gamma^* \qquad (11)$$

where $\gamma^* = R^{-1}(y - F\beta^*)$, $\beta^* = (F^T R^{-1} F)^{-1} F^T R^{-1} y$, and $r(x) = [R(\theta, x_1, x), \dots, R(\theta, x_m, x)]^T$. Once $\beta^* \in \mathbb{R}^{p \times q}$ and $\gamma^* \in \mathbb{R}^{m \times q}$ have been calculated for all collocation points, they become fixed and it is no longer necessary to recompute them for an untried site. Only $f(x) \in \mathbb{R}^p$ and $r(x) \in \mathbb{R}^m$ need to be computed for each untried site.
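The predictor of Eq. (11) reduces to a few linear solves. The sketch below assumes S is the m-by-2 matrix of design sites, y the m-by-1 responses, theta the 1-by-2 correlation parameters, and x0 a 1-by-2 untried site (all hypothetical names); it is a bare-bones illustration of Eqs. (6)-(11), not the full DACE toolbox:

% Sketch of the universal Kriging predictor, Eqs. (6)-(11), for d = 2
% with Gaussian correlation and the quadratic regression model (p = 6).
freg = @(s) [ones(size(s,1),1), s(:,1), s(:,2), ...
             s(:,1).^2, s(:,1).*s(:,2), s(:,2).^2];
m = size(S,1);
R = zeros(m,m);
for i = 1:m
    for j = 1:m
        dx = S(i,:) - S(j,:);
        R(i,j) = exp(-theta*(dx.^2)');        % Eqs. (6) and (10)
    end
end
F     = freg(S);                              % Eq. (9)
beta  = (F'*(R\F)) \ (F'*(R\y));              % generalized least squares
gamma = R \ (y - F*beta);
r0 = zeros(m,1);                              % correlations to x0
for i = 1:m
    dx = S(i,:) - x0;
    r0(i) = exp(-theta*(dx.^2)');
end
yhat = freg(x0)*beta + r0'*gamma;             % Eq. (11)

Since beta and gamma depend only on the design sites, they are computed once; each new untried site costs only the r0 vector, as noted above.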

Over the past twenty years many variations of Kriging have evolved. One of the differences between these techniques is the way the expectation of the random field, $\mu(x) = E[Y(x)]$, is handled. Possible options include simple Kriging, which assumes a known constant trend ($\mu(x) = 0$); ordinary Kriging, which assumes an unknown constant trend ($\mu(x) = \mu$); and universal Kriging, which assumes a general linear trend model ($\mu(x) = \sum_{k=0}^{p} \beta_k f_k(x)$). In this study we also considered simple Kriging (or Gaussian process regression).[17] The value of the response surface at any point $x_0$ in the domain is given by Eq. (12):

$$\hat{y}(x_0) = \begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix}^{T} \begin{bmatrix} c(x_1, x_1) & \cdots & c(x_1, x_m) \\ \vdots & \ddots & \vdots \\ c(x_m, x_1) & \cdots & c(x_m, x_m) \end{bmatrix}^{-1} \begin{bmatrix} c(x_1, x_0) \\ \vdots \\ c(x_m, x_0) \end{bmatrix} \qquad (12)$$

The covariance function $c(w, x)$ for this study is defined by Eq. (13); the covariance used is a squared exponential covariance, or Gauss kernel, with characteristic length scale $\theta$:

$$c(w, x) = \sigma_f^2 \prod_{j=1}^{d} \exp\left(-\theta_j (w_j - x_j)^2\right) + \sigma_n^2 \delta_{ij} \qquad (13)$$

The variance of the covariance function ($\sigma_f^2$) was kept at 1 throughout the study. For simple Kriging the $\sigma_n^2$ parameter (also known as the noise variance) is zero. Since $\theta$ is a $d \times 1$ vector of correlation parameters, the formulation presented here allows for correlation-length anisotropy in different directions. In order to simplify the use of the Kriging method, the correlation parameters are isotropic, independent of the number of collocation points used, and kept constant at 0.5, 2, and 1 for the cosine, Runge, and Rosenbrock surfaces respectively. Optimal $\theta$ values can be computed a priori using maximum likelihood estimation. Depending on the response surface, the iterative search scheme used to find optimal $\theta$ values can prove problematic, as is shown to be the case for the Runge surface.

If prediction with noisy measurements is performed, assuming additive independent identically distributed Gaussian noise (with variance $\sigma_n^2$), it follows from the independence assumption about the noise that a diagonal matrix is added, in comparison to the noise-free case. The complication in this study is that not all measurements are noisy (specifically, the measurements outside the gray zone are noise-free). Therefore only the diagonal elements corresponding to the points in the uncertain domain are varied. The question then becomes how much additional covariance to add to each uncertain point. If nothing is added to the original covariance, the response surface passes exactly through the uncertain point. If a very large number is added to the original covariance, the response surface completely disregards the point. The plot of $\hat{y}(x_0)$ as a function of $\sigma_n^2$ at some $x_0$ in the uncertain domain resembles $\mathrm{erf}(\sigma_n^2)$. Using this information, a modified simple Kriging algorithm was developed to incorporate information about the uncertainty associated with input collocation points. MIR is able to incorporate input uncertainty as well, while none of the other methods in this paper can. The implementation of the modified simple Kriging algorithm is as follows:

1. Build a simple Kriging response surface that passes exactly through all certain points, ignoring all uncertain points.

2. If the resulting response surface passes through the uncertainty bounds of all the uncertain points, the response surface is complete.

3. If not, force the response surface to pass through the maximum or minimum bound, and rebuild the simple Kriging response surface. The choice of passing through the maximum or the minimum bound of the uncertain point depends on the original surface from step 1.

Figs. 6(a) and 6(b) show how the modified simple Kriging algorithm works. If the uncertainty bound on the one uncertain point in the domain is small (as shown in Fig. 6(a)), the algorithm proceeds to step 3. If the uncertainty is large, as in Fig. 6(b), the algorithm stops at step 2, because ignoring the point creates a constant response surface that is within the bounds of the uncertain point.

Figure 6. Modified simple Kriging algorithm behavior depending on uncertainty bounds: (a) low uncertainty, (b) high uncertainty.
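To illustrate how the per-point noise variance of Eq. (13) enters, the sketch below evaluates the simple Kriging predictor of Eq. (12) with a noise term added only on the diagonal entries belonging to gray points; S, y, theta, and x0 are as in the earlier sketch, and sn2 is an m-by-1 vector that is zero in the white zone (the specific values are hypothetical):

% Sketch of simple Kriging, Eqs. (12)-(13): the noise variance sn2(i) is
% nonzero only for uncertain (gray) points, so the surface interpolates
% white-zone data exactly while being allowed to miss gray-zone data.
sigf2 = 1;                                    % process variance (fixed at 1)
m = size(S,1);
C = zeros(m,m);
for i = 1:m
    for j = 1:m
        dx = S(i,:) - S(j,:);
        C(i,j) = sigf2*exp(-theta*(dx.^2)');
    end
end
C = C + diag(sn2);                            % Eq. (13): noise on gray points
c0 = zeros(m,1);
for i = 1:m
    dx = S(i,:) - x0;
    c0(i) = sigf2*exp(-theta*(dx.^2)');
end
yhat = c0' * (C \ y);                         % Eq. (12)
% Increasing sn2(i) moves the surface smoothly from interpolating point i
% (sn2(i) = 0) to ignoring it entirely (sn2(i) large), which is the
% erf-like behavior described above.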

4. MIR - Multivariate Interpolation Regression

The high order multivariate stochastic approximation scheme[18] for arbitrary data sets, referred to as MIR, is used as interpolation when there is no error associated with the function evaluations, and as nonlinear regression when nonzero measurement errors are associated with the function evaluations. Every function evaluation (or data point) is represented as a Taylor series, and the high order derivatives in the Taylor series are treated as random variables. The approximation coefficients are then chosen to minimize an objective function at each collocation point by solving an equality-constrained least squares problem. It is proposed that this novel scheme converges faster than other methods, including Kriging, when there is uncertainty associated with the data points.

5. Padé Interpolation

The set of orthogonal Legendre polynomials is used as the basis for the Padé-type interpolation[19] used in this study. Details of the scheme implementation can be found in Chantrasmi et al.[20]

IV. Results

The results of constructing response surfaces using the five approximation schemes, along with the Kriging variants (simple and modified Kriging) described in part D of section III, are presented in this section, divided into four subsections for clarity. Subsection A describes how the statistical convergence depends on the distribution of abscissas. It is shown that the LGL, Gauss, and Clenshaw-Curtis abscissas are all comparable, and outperform the Newton-Cotes distribution. Subsection B shows how the statistical metrics converge for the Perfect case (summarized in Table 1), as well as for the Erroneous, Partial, and Imperfect cases, for the polynomial, spline, MIR, universal Kriging, and Padé approximations. Some insight is given as to why certain approximations outperform others for the cosine surface while the opposite is true for the Rosenbrock surface; this is mainly due to the choice of basis functions for the approximation, as well as the correlation length scales selected. Subsection C shows the results of using the simple Kriging algorithm for the Perfect, Erroneous, and Partial cases, and demonstrates that if simple Kriging is modified to include the uncertainty associated with uncertain function evaluations, the resulting response surface is exactly the same as the Perfect response surface if an optimal length scale is selected. Finally, the last subsection uses the simple Kriging algorithm to determine whether there is an optimal characteristic length scale that minimizes error when constructing response surfaces with imperfect function evaluations. The correlation length scale is varied to determine whether the optimal value changes depending on the size and location of the gray zone.

A. Statistics Convergence Dependence on Distribution of Abscissas

Four different types of abscissa distributions are considered, as described in part B of section III. The results show that the Newton-Cotes distribution yields the highest statistical metric errors for a given test surface. This is to be expected, since it has been shown in the past that Newton-Cotes nodes do not converge in the presence of rounding errors, and faster convergence can be achieved by clustering the abscissas close to the boundary of the domain of interest, as in the case of the Gauss and Clenshaw-Curtis nodes.[11] The performance of the three remaining point distributions is comparable. The locations of the LGL and Clenshaw-Curtis nodes are very similar; as a result, the statistical metric convergence for these two distributions is indistinguishable. The Gauss abscissas in some cases behave better than the others, but in most cases have slightly slower convergence compared to the LGL abscissas. A disadvantage of the Gauss abscissas is the need to extrapolate the response surface in the area between the outermost abscissa and the edge of the domain, given that there is no Gauss abscissa on the boundary. Since the distance between the outermost abscissa and the boundary increases when the number of abscissas decreases, a higher percentage of the domain needs to be extrapolated in order to build a response surface that reaches the boundary. If the number of abscissas is low, extrapolation is not as accurate as interpolation (as will be shown in subsection B), yielding slower convergence of the statistical metrics.

Figure 7. Convergence of statistical metrics for the Runge surface as a function of the number of Newton-Cotes collocation points: (a) RMS error, (b) mean, (c) variance.

The most unstable point distribution turns out to be Newton-Cotes. Fig. 7 shows how this choice of abscissa distribution makes the statistics diverge when interpolating the 2D Runge surface. This is the well known Runge phenomenon. Since all the other point distributions are very comparable, only the results of the LGL node distribution are shown henceforward.

B. Statistics Convergence

The convergence results of the RMS, mean, and variance errors are discussed for the five approximation schemes described in section III. Only the results for the centered gray zone are presented in Figs. 8 to 16, due to the similarity of the results for the various gray zone sizes. In all cases, for a given gray zone location, the convergence trends for all three statistical metrics differ only in value, with errors increasing with gray zone size. The Perfect test case shows that for all three test functions, as the number of LGL abscissas is increased, the RMSE, $|\tilde{\mu} - \mu|$, and $|\tilde{\sigma}^2 - \sigma^2|$ metrics decrease, eventually converging to machine zero if a large number of abscissas is used. The performance of each approximation scheme for the cosine surface, in order of decreasing performance, is polynomial, Padé, MIR, DACE-Kriging, and finally spline. The polynomial interpolation outperforms the spline interpolation by at least 10 orders of magnitude for all three statistical metrics. The Perfect results for the Runge surface are very comparable for all five approximation schemes. MIR and DACE-Kriging are typically the best schemes, depending on the number of abscissas, followed by the polynomial, spline, and Padé approximations.
DACE-Kriging outperforms MIR when an even number of abscissas is selected. When there is an odd number of points, the response surface will definitely pass through the maximum value of the Runge function (0.5 at the origin), but when an even number of points is selected, the approximation that has the lower correlation length (DACE-Kriging) outperforms the other approximation schemes.

The Rosenbrock surface Perfect case provides a baseline test to determine which scheme is the most accurate in the presence of local minima and maxima. The results show that the polynomial and Padé schemes converge to the analytical values very quickly, using five or more abscissas in each dimension. The MIR approximation reaches the same level of convergence as the two previously mentioned schemes if a minimum of 10 abscissas is selected in each dimension. The DACE-Kriging and spline interpolation schemes are not able to achieve the same level of accuracy as the other three schemes. The reason why the polynomial and Padé schemes outperform the others is evident from the way these schemes use polynomial representations and rational approximants, respectively, as the basis of the response surface: the Rosenbrock function is a fourth order polynomial, meaning that 5 points are needed in each dimension to pass through the surface exactly. Table 1 shows a summary of the Perfect case RMSE for all the approximation schemes using 5, 11, and 23 abscissas in each dimension.

Table 1. Summary of the Perfect case RMSE for the three analytical functions (cosine, Runge, Rosenbrock) using 5, 11, and 23 abscissas in each dimension, for the polynomial, spline, Kriging (universal and simple), Padé, and MIR schemes.

The Erroneous test case shows that for all three test functions, as the number of LGL abscissas is increased, the RMSE, $|\tilde{\mu} - \mu|$, and $|\tilde{\sigma}^2 - \sigma^2|$ metrics do not converge to machine zero. Instead, in every case the statistical metrics asymptotically converge to an incorrect value. This is to be expected, since for the Erroneous case the surface has to pass through the imperfect function evaluations. For this case all approximation schemes produce nearly identical response surfaces. Comparing the offset and centered gray zone results shows that the absolute values of the errors are higher for the offset gray area on the cosine and Rosenbrock surfaces, and for the centered gray area on the Runge surface. The explanation becomes clear when looking at Fig. 1: the largest variation of the analytical response surface occurs in the cosine and Rosenbrock offset zones and in the Runge centered zone. The manner in which error is artificially added to the unperturbed surfaces forces the gray zone with the largest analytical variance to have the largest error; the statistical metric errors for the Rosenbrock offset gray zone are at least two orders of magnitude higher than for the centered gray zone, because there is large variation inside the offset quarter circle and low variation in the circle centered at the origin. When the size of the gray zone is increased, the absolute values of all three metrics increase by half an order of magnitude for the cosine and Runge surfaces, and by more than an order of magnitude for the Rosenbrock surface. If the size of the offset gray zone were increased to include the origin (the maximum of the response surface) in Fig. 1(b), the errors would increase by at least two orders of magnitude.

Convergence can be achieved in some cases by ignoring the function evaluations inside the gray zone (the Partial case). This can be seen in Figs. 8(c) and 14(c), which show that all approximation schemes except the spline interpolation exhibit decreasing RMS error with an increasing number of abscissas. In other cases, as shown in Fig. 11(c), the approximated response surface underestimates the Runge peak, and the statistical metrics do not converge. For the Partial cosine surface, the polynomial interpolant is the most accurate, followed by DACE-Kriging and MIR. For the Runge surface, the polynomial response surface is the least accurate and highly unstable. For the Partial Rosenbrock surface, the MIR approximation is more precise than the polynomial or DACE-Kriging response surfaces by at least five orders of magnitude. The size of the centered gray area for the cosine surface influences the convergence rate of the statistical metrics in such a way that a smaller gray zone produces faster convergence. If the gray area is too large a fraction of the full domain, none of the cosine response surfaces converge. For the offset gray zones, regardless of the surface selected, none of the Partial response surfaces converge. In fact, in the case of a DACE-Kriging response for the Runge surface, the statistical metrics actually diverge with an increasing number of abscissas.

The last case considered is Imperfect, whereby any imperfect function evaluations located inside the gray zone are corrected by the MIR approximation. In all the Imperfect plots the MIR approximation data (triangle symbols) is included as well. The computational time required to build a response surface using the MIR scheme is approximately 100 times longer than for the other schemes presented in this paper. The computational time required to fix the imperfect function evaluations inside the gray zone using the MIR approximation is on the order of 2 to 10 times longer than for the other schemes, depending on the number of abscissas that lie inside the gray zone. If time constraints are not an issue, the most accurate response surface technique uses the MIR approximation for all points inside the domain, regardless of the color of the zone. The Imperfect case shows that the accuracy of the response surface can be improved by incorporating the uncertainty, as opposed to disregarding it (as shown by the statistical metrics in Figs. 14(c) and 14(d)). If time constraints are a major issue, disregarding the imperfect points and converting the gray zone into a black zone is in general the least computationally demanding approach, but also the most inaccurate. For the cosine surface with a centered black zone, however, the Partial case (Fig. 8(c)) converges faster than the Imperfect case (Fig. 8(d)). The justification for this is that the MIR approximation overestimates the cosine peak for the Partial case, while the polynomial and DACE-Kriging surfaces pass exactly through the analytical cosine response surface without having any knowledge of what occurs inside the black zone.

Figure 8. RMS error convergence for the cosine surface as a function of the number of LGL collocation points: (a) Perfect, (b) Erroneous, (c) Partial, (d) Imperfect.

Figure 9. Mean convergence for the cosine surface as a function of the number of LGL collocation points: (a) Perfect, (b) Erroneous, (c) Partial, (d) Imperfect.

Figure 10. Variance convergence for the cosine surface as a function of the number of LGL collocation points: (a) Perfect, (b) Erroneous, (c) Partial, (d) Imperfect.

Figure 11. RMS error convergence for the Runge surface as a function of the number of LGL collocation points: (a) Perfect, (b) Erroneous, (c) Partial, (d) Imperfect.

Figure 12. Mean convergence for the Runge surface as a function of the number of LGL collocation points: (a) Perfect, (b) Erroneous, (c) Partial, (d) Imperfect.

Figure 13. Variance convergence for the Runge surface as a function of the number of LGL collocation points: (a) Perfect, (b) Erroneous, (c) Partial, (d) Imperfect.

Figure 14. RMS error convergence for the Rosenbrock surface as a function of the number of LGL collocation points: (a) Perfect, (b) Erroneous, (c) Partial, (d) Imperfect.

Figure 15. Mean convergence for the Rosenbrock surface as a function of the number of LGL collocation points: (a) Perfect, (b) Erroneous, (c) Partial, (d) Imperfect.

Figure 16. Variance convergence for the Rosenbrock surface as a function of the number of LGL collocation points: (a) Perfect, (b) Erroneous, (c) Partial, (d) Imperfect.

C. Modified Simple Kriging Statistics Convergence

The simple Kriging algorithm described in part D of section III is used as a starting point and modified to incorporate the variability associated with imperfect function evaluations. Figs. 17 through 19 show how the statistics converge using the simple Kriging algorithm (circle, diamond, and triangle symbols) and the modified simple Kriging algorithm (star symbols) for the cosine, Runge, and Rosenbrock surfaces respectively. The modified simple Kriging test case is given the Erroneous case points (imperfect function evaluations), as well as an interval that describes the uncertainty associated with each function evaluation. The response surface has equal probability of passing through any point within the interval. In all cases for the modified simple Kriging test case, if the amount of perturbation at location $x_i$ is given by $\epsilon$, then the interval through which the response surface must pass is $[f_i, f_i + 2\epsilon]$, where $f_i$ is the analytical value of the response surface at location $x_i$. For the Erroneous case, the response surface always passes through the imperfect function evaluation $f_i + \epsilon$ at location $x_i$. The modified simple Kriging algorithm is identical to the simple Kriging algorithm when there is no variability associated with the function evaluations (i.e., the Perfect, Erroneous, and Partial cases). The figures show that the modified simple Kriging algorithm is fully capable of recovering the Perfect response surface if an optimal length scale is used.

Figure 17. Statistical metric convergence for the cosine surface using a modified simple Kriging algorithm with a centered gray zone: (a) RMS error, (b) mean error, (c) variance error.

Figure 18. Statistical metric convergence for the Runge surface using a modified simple Kriging algorithm with a centered gray zone: (a) RMS error, (b) mean error, (c) variance error.

Using the modified simple Kriging algorithm is much more advantageous than completely ignoring the imperfect function evaluations. This is evident from the fact that the modified simple Kriging curves always converge much faster than the Partial curves. In some cases (Fig. 18) the Partial case does not converge at all, while the modified simple Kriging algorithm does. When comparing the modified or simple Kriging results to the other methods described in part D of section III, the absolute values of the RMS, mean, and variance errors are on average an order of magnitude higher for the simple Kriging variants. Universal Kriging outperforms simple Kriging, which is to be expected.


More information

We consider the problem of finding a polynomial that interpolates a given set of values:

We consider the problem of finding a polynomial that interpolates a given set of values: Chapter 5 Interpolation 5. Polynomial Interpolation We consider the problem of finding a polynomial that interpolates a given set of values: x x 0 x... x n y y 0 y... y n where the x i are all distinct.

More information

(0, 0), (1, ), (2, ), (3, ), (4, ), (5, ), (6, ).

(0, 0), (1, ), (2, ), (3, ), (4, ), (5, ), (6, ). 1 Interpolation: The method of constructing new data points within the range of a finite set of known data points That is if (x i, y i ), i = 1, N are known, with y i the dependent variable and x i [x

More information

Slow Exponential Growth for Clenshaw Curtis Sparse Grids

Slow Exponential Growth for Clenshaw Curtis Sparse Grids Slow Exponential Growth for Clenshaw Curtis Sparse Grids John Burkardt Interdisciplinary Center for Applied Mathematics & Information Technology Department Virginia Tech... http://people.sc.fsu.edu/ burkardt/presentations/sgmga

More information

An Empirical Chaos Expansion Method for Uncertainty Quantification

An Empirical Chaos Expansion Method for Uncertainty Quantification An Empirical Chaos Expansion Method for Uncertainty Quantification Melvin Leok and Gautam Wilkins Abstract. Uncertainty quantification seeks to provide a quantitative means to understand complex systems

More information

Empirical Models Interpolation Polynomial Models

Empirical Models Interpolation Polynomial Models Mathematical Modeling Lia Vas Empirical Models Interpolation Polynomial Models Lagrange Polynomial. Recall that two points (x 1, y 1 ) and (x 2, y 2 ) determine a unique line y = ax + b passing them (obtained

More information

Gradient Enhanced Universal Kriging Model for Uncertainty Propagation in Nuclear Engineering

Gradient Enhanced Universal Kriging Model for Uncertainty Propagation in Nuclear Engineering Gradient Enhanced Universal Kriging Model for Uncertainty Propagation in Nuclear Engineering Brian A. Lockwood 1 and Mihai Anitescu 2 1 Department of Mechanical Engineering University of Wyoming 2 Mathematics

More information

Statistical Data Analysis

Statistical Data Analysis DS-GA 0 Lecture notes 8 Fall 016 1 Descriptive statistics Statistical Data Analysis In this section we consider the problem of analyzing a set of data. We describe several techniques for visualizing the

More information

PROJECTION METHODS FOR DYNAMIC MODELS

PROJECTION METHODS FOR DYNAMIC MODELS PROJECTION METHODS FOR DYNAMIC MODELS Kenneth L. Judd Hoover Institution and NBER June 28, 2006 Functional Problems Many problems involve solving for some unknown function Dynamic programming Consumption

More information

Confidence Estimation Methods for Neural Networks: A Practical Comparison

Confidence Estimation Methods for Neural Networks: A Practical Comparison , 6-8 000, Confidence Estimation Methods for : A Practical Comparison G. Papadopoulos, P.J. Edwards, A.F. Murray Department of Electronics and Electrical Engineering, University of Edinburgh Abstract.

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Dimension-adaptive sparse grid for industrial applications using Sobol variances

Dimension-adaptive sparse grid for industrial applications using Sobol variances Master of Science Thesis Dimension-adaptive sparse grid for industrial applications using Sobol variances Heavy gas flow over a barrier March 11, 2015 Ad Dimension-adaptive sparse grid for industrial

More information

STOCHASTIC SAMPLING METHODS

STOCHASTIC SAMPLING METHODS STOCHASTIC SAMPLING METHODS APPROXIMATING QUANTITIES OF INTEREST USING SAMPLING METHODS Recall that quantities of interest often require the evaluation of stochastic integrals of functions of the solutions

More information

Comparison of Modern Stochastic Optimization Algorithms

Comparison of Modern Stochastic Optimization Algorithms Comparison of Modern Stochastic Optimization Algorithms George Papamakarios December 214 Abstract Gradient-based optimization methods are popular in machine learning applications. In large-scale problems,

More information

State Estimation of Linear and Nonlinear Dynamic Systems

State Estimation of Linear and Nonlinear Dynamic Systems State Estimation of Linear and Nonlinear Dynamic Systems Part I: Linear Systems with Gaussian Noise James B. Rawlings and Fernando V. Lima Department of Chemical and Biological Engineering University of

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 3 Linear

More information

ENGRG Introduction to GIS

ENGRG Introduction to GIS ENGRG 59910 Introduction to GIS Michael Piasecki October 13, 2017 Lecture 06: Spatial Analysis Outline Today Concepts What is spatial interpolation Why is necessary Sample of interpolation (size and pattern)

More information

Third Edition. William H. Press. Raymer Chair in Computer Sciences and Integrative Biology The University of Texas at Austin. Saul A.

Third Edition. William H. Press. Raymer Chair in Computer Sciences and Integrative Biology The University of Texas at Austin. Saul A. NUMERICAL RECIPES The Art of Scientific Computing Third Edition William H. Press Raymer Chair in Computer Sciences and Integrative Biology The University of Texas at Austin Saul A. Teukolsky Hans A. Bethe

More information

MACHINE LEARNING. Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA

MACHINE LEARNING. Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA 1 MACHINE LEARNING Methods for feature extraction and reduction of dimensionality: Probabilistic PCA and kernel PCA 2 Practicals Next Week Next Week, Practical Session on Computer Takes Place in Room GR

More information

Evaluation of Non-Intrusive Approaches for Wiener-Askey Generalized Polynomial Chaos

Evaluation of Non-Intrusive Approaches for Wiener-Askey Generalized Polynomial Chaos 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference 6t 7 - April 28, Schaumburg, IL AIAA 28-892 Evaluation of Non-Intrusive Approaches for Wiener-Askey Generalized

More information

Differentiation and Integration

Differentiation and Integration Differentiation and Integration (Lectures on Numerical Analysis for Economists II) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 February 12, 2018 1 University of Pennsylvania 2 Boston College Motivation

More information

Fast Numerical Methods for Stochastic Computations

Fast Numerical Methods for Stochastic Computations Fast AreviewbyDongbinXiu May 16 th,2013 Outline Motivation 1 Motivation 2 3 4 5 Example: Burgers Equation Let us consider the Burger s equation: u t + uu x = νu xx, x [ 1, 1] u( 1) =1 u(1) = 1 Example:

More information

Preliminary Examination in Numerical Analysis

Preliminary Examination in Numerical Analysis Department of Applied Mathematics Preliminary Examination in Numerical Analysis August 7, 06, 0 am pm. Submit solutions to four (and no more) of the following six problems. Show all your work, and justify

More information

Outline. 1 Numerical Integration. 2 Numerical Differentiation. 3 Richardson Extrapolation

Outline. 1 Numerical Integration. 2 Numerical Differentiation. 3 Richardson Extrapolation Outline Numerical Integration Numerical Differentiation Numerical Integration Numerical Differentiation 3 Michael T. Heath Scientific Computing / 6 Main Ideas Quadrature based on polynomial interpolation:

More information

Advanced analysis and modelling tools for spatial environmental data. Case study: indoor radon data in Switzerland

Advanced analysis and modelling tools for spatial environmental data. Case study: indoor radon data in Switzerland EnviroInfo 2004 (Geneva) Sh@ring EnviroInfo 2004 Advanced analysis and modelling tools for spatial environmental data. Case study: indoor radon data in Switzerland Mikhail Kanevski 1, Michel Maignan 1

More information

Linear Models for Regression. Sargur Srihari

Linear Models for Regression. Sargur Srihari Linear Models for Regression Sargur srihari@cedar.buffalo.edu 1 Topics in Linear Regression What is regression? Polynomial Curve Fitting with Scalar input Linear Basis Function Models Maximum Likelihood

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

Chapter 9. Non-Parametric Density Function Estimation

Chapter 9. Non-Parametric Density Function Estimation 9-1 Density Estimation Version 1.2 Chapter 9 Non-Parametric Density Function Estimation 9.1. Introduction We have discussed several estimation techniques: method of moments, maximum likelihood, and least

More information

Linear Regression (9/11/13)

Linear Regression (9/11/13) STA561: Probabilistic machine learning Linear Regression (9/11/13) Lecturer: Barbara Engelhardt Scribes: Zachary Abzug, Mike Gloudemans, Zhuosheng Gu, Zhao Song 1 Why use linear regression? Figure 1: Scatter

More information

CHAPTER 10 Zeros of Functions

CHAPTER 10 Zeros of Functions CHAPTER 10 Zeros of Functions An important part of the maths syllabus in secondary school is equation solving. This is important for the simple reason that equations are important a wide range of problems

More information

Function Approximation

Function Approximation 1 Function Approximation This is page i Printer: Opaque this 1.1 Introduction In this chapter we discuss approximating functional forms. Both in econometric and in numerical problems, the need for an approximating

More information

Contents. I Basic Methods 13

Contents. I Basic Methods 13 Preface xiii 1 Introduction 1 I Basic Methods 13 2 Convergent and Divergent Series 15 2.1 Introduction... 15 2.1.1 Power series: First steps... 15 2.1.2 Further practical aspects... 17 2.2 Differential

More information

Uncertainty Quantification and Validation Using RAVEN. A. Alfonsi, C. Rabiti. Risk-Informed Safety Margin Characterization. https://lwrs.inl.

Uncertainty Quantification and Validation Using RAVEN. A. Alfonsi, C. Rabiti. Risk-Informed Safety Margin Characterization. https://lwrs.inl. Risk-Informed Safety Margin Characterization Uncertainty Quantification and Validation Using RAVEN https://lwrs.inl.gov A. Alfonsi, C. Rabiti North Carolina State University, Raleigh 06/28/2017 Assumptions

More information

Polynomial Chaos and Karhunen-Loeve Expansion

Polynomial Chaos and Karhunen-Loeve Expansion Polynomial Chaos and Karhunen-Loeve Expansion 1) Random Variables Consider a system that is modeled by R = M(x, t, X) where X is a random variable. We are interested in determining the probability of the

More information

Numerical integration and differentiation. Unit IV. Numerical Integration and Differentiation. Plan of attack. Numerical integration.

Numerical integration and differentiation. Unit IV. Numerical Integration and Differentiation. Plan of attack. Numerical integration. Unit IV Numerical Integration and Differentiation Numerical integration and differentiation quadrature classical formulas for equally spaced nodes improper integrals Gaussian quadrature and orthogonal

More information

Introduction to Uncertainty Quantification in Computational Science Handout #3

Introduction to Uncertainty Quantification in Computational Science Handout #3 Introduction to Uncertainty Quantification in Computational Science Handout #3 Gianluca Iaccarino Department of Mechanical Engineering Stanford University June 29 - July 1, 2009 Scuola di Dottorato di

More information

A Polynomial Chaos Approach to Robust Multiobjective Optimization

A Polynomial Chaos Approach to Robust Multiobjective Optimization A Polynomial Chaos Approach to Robust Multiobjective Optimization Silvia Poles 1, Alberto Lovison 2 1 EnginSoft S.p.A., Optimization Consulting Via Giambellino, 7 35129 Padova, Italy s.poles@enginsoft.it

More information

NONLOCALITY AND STOCHASTICITY TWO EMERGENT DIRECTIONS FOR APPLIED MATHEMATICS. Max Gunzburger

NONLOCALITY AND STOCHASTICITY TWO EMERGENT DIRECTIONS FOR APPLIED MATHEMATICS. Max Gunzburger NONLOCALITY AND STOCHASTICITY TWO EMERGENT DIRECTIONS FOR APPLIED MATHEMATICS Max Gunzburger Department of Scientific Computing Florida State University North Carolina State University, March 10, 2011

More information

Stochastic optimization - how to improve computational efficiency?

Stochastic optimization - how to improve computational efficiency? Stochastic optimization - how to improve computational efficiency? Christian Bucher Center of Mechanics and Structural Dynamics Vienna University of Technology & DYNARDO GmbH, Vienna Presentation at Czech

More information

Scalable kernel methods and their use in black-box optimization

Scalable kernel methods and their use in black-box optimization with derivatives Scalable kernel methods and their use in black-box optimization David Eriksson Center for Applied Mathematics Cornell University dme65@cornell.edu November 9, 2018 1 2 3 4 1/37 with derivatives

More information

DIRECT AND INDIRECT LEAST SQUARES APPROXIMATING POLYNOMIALS FOR THE FIRST DERIVATIVE FUNCTION

DIRECT AND INDIRECT LEAST SQUARES APPROXIMATING POLYNOMIALS FOR THE FIRST DERIVATIVE FUNCTION Applied Probability Trust (27 October 2016) DIRECT AND INDIRECT LEAST SQUARES APPROXIMATING POLYNOMIALS FOR THE FIRST DERIVATIVE FUNCTION T. VAN HECKE, Ghent University Abstract Finite difference methods

More information

Prediction of double gene knockout measurements

Prediction of double gene knockout measurements Prediction of double gene knockout measurements Sofia Kyriazopoulou-Panagiotopoulou sofiakp@stanford.edu December 12, 2008 Abstract One way to get an insight into the potential interaction between a pair

More information

Exam 2. Average: 85.6 Median: 87.0 Maximum: Minimum: 55.0 Standard Deviation: Numerical Methods Fall 2011 Lecture 20

Exam 2. Average: 85.6 Median: 87.0 Maximum: Minimum: 55.0 Standard Deviation: Numerical Methods Fall 2011 Lecture 20 Exam 2 Average: 85.6 Median: 87.0 Maximum: 100.0 Minimum: 55.0 Standard Deviation: 10.42 Fall 2011 1 Today s class Multiple Variable Linear Regression Polynomial Interpolation Lagrange Interpolation Newton

More information

A Spectral Approach to Linear Bayesian Updating

A Spectral Approach to Linear Bayesian Updating A Spectral Approach to Linear Bayesian Updating Oliver Pajonk 1,2, Bojana V. Rosic 1, Alexander Litvinenko 1, and Hermann G. Matthies 1 1 Institute of Scientific Computing, TU Braunschweig, Germany 2 SPT

More information

Zeros of Functions. Chapter 10

Zeros of Functions. Chapter 10 Chapter 10 Zeros of Functions An important part of the mathematics syllabus in secondary school is equation solving. This is important for the simple reason that equations are important a wide range of

More information

Bayesian Methods for Machine Learning

Bayesian Methods for Machine Learning Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 9 Initial Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling

Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling Kriging models with Gaussian processes - covariance function estimation and impact of spatial sampling François Bachoc former PhD advisor: Josselin Garnier former CEA advisor: Jean-Marc Martinez Department

More information

Review. Numerical Methods Lecture 22. Prof. Jinbo Bi CSE, UConn

Review. Numerical Methods Lecture 22. Prof. Jinbo Bi CSE, UConn Review Taylor Series and Error Analysis Roots of Equations Linear Algebraic Equations Optimization Numerical Differentiation and Integration Ordinary Differential Equations Partial Differential Equations

More information

Sensitivity analysis using the Metamodel of Optimal Prognosis. Lectures. Thomas Most & Johannes Will

Sensitivity analysis using the Metamodel of Optimal Prognosis. Lectures. Thomas Most & Johannes Will Lectures Sensitivity analysis using the Metamodel of Optimal Prognosis Thomas Most & Johannes Will presented at the Weimar Optimization and Stochastic Days 2011 Source: www.dynardo.de/en/library Sensitivity

More information

Is My CFD Mesh Adequate? A Quantitative Answer

Is My CFD Mesh Adequate? A Quantitative Answer Is My CFD Mesh Adequate? A Quantitative Answer Krzysztof J. Fidkowski Gas Dynamics Research Colloqium Aerospace Engineering Department University of Michigan January 26, 2011 K.J. Fidkowski (UM) GDRC 2011

More information

Introduction to Numerical Analysis

Introduction to Numerical Analysis Université de Liège Faculté des Sciences Appliquées Introduction to Numerical Analysis Edition 2015 Professor Q. Louveaux Department of Electrical Engineering and Computer Science Montefiore Institute

More information

Dinesh Kumar, Mehrdad Raisee and Chris Lacor

Dinesh Kumar, Mehrdad Raisee and Chris Lacor Dinesh Kumar, Mehrdad Raisee and Chris Lacor Fluid Mechanics and Thermodynamics Research Group Vrije Universiteit Brussel, BELGIUM dkumar@vub.ac.be; m_raisee@yahoo.com; chris.lacor@vub.ac.be October, 2014

More information

SOLVING POWER AND OPTIMAL POWER FLOW PROBLEMS IN THE PRESENCE OF UNCERTAINTY BY AFFINE ARITHMETIC

SOLVING POWER AND OPTIMAL POWER FLOW PROBLEMS IN THE PRESENCE OF UNCERTAINTY BY AFFINE ARITHMETIC SOLVING POWER AND OPTIMAL POWER FLOW PROBLEMS IN THE PRESENCE OF UNCERTAINTY BY AFFINE ARITHMETIC Alfredo Vaccaro RTSI 2015 - September 16-18, 2015, Torino, Italy RESEARCH MOTIVATIONS Power Flow (PF) and

More information

Divergence Formulation of Source Term

Divergence Formulation of Source Term Preprint accepted for publication in Journal of Computational Physics, 2012 http://dx.doi.org/10.1016/j.jcp.2012.05.032 Divergence Formulation of Source Term Hiroaki Nishikawa National Institute of Aerospace,

More information

A short introduction to INLA and R-INLA

A short introduction to INLA and R-INLA A short introduction to INLA and R-INLA Integrated Nested Laplace Approximation Thomas Opitz, BioSP, INRA Avignon Workshop: Theory and practice of INLA and SPDE November 7, 2018 2/21 Plan for this talk

More information

A Study of Covariances within Basic and Extended Kalman Filters

A Study of Covariances within Basic and Extended Kalman Filters A Study of Covariances within Basic and Extended Kalman Filters David Wheeler Kyle Ingersoll December 2, 2013 Abstract This paper explores the role of covariance in the context of Kalman filters. The underlying

More information

CS 450 Numerical Analysis. Chapter 9: Initial Value Problems for Ordinary Differential Equations

CS 450 Numerical Analysis. Chapter 9: Initial Value Problems for Ordinary Differential Equations Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

IDL Advanced Math & Stats Module

IDL Advanced Math & Stats Module IDL Advanced Math & Stats Module Regression List of Routines and Functions Multiple Linear Regression IMSL_REGRESSORS IMSL_MULTIREGRESS IMSL_MULTIPREDICT Generates regressors for a general linear model.

More information

5.1 2D example 59 Figure 5.1: Parabolic velocity field in a straight two-dimensional pipe. Figure 5.2: Concentration on the input boundary of the pipe. The vertical axis corresponds to r 2 -coordinate,

More information

Final Report: DE-FG02-95ER25239 Spectral Representations of Uncertainty: Algorithms and Applications

Final Report: DE-FG02-95ER25239 Spectral Representations of Uncertainty: Algorithms and Applications Final Report: DE-FG02-95ER25239 Spectral Representations of Uncertainty: Algorithms and Applications PI: George Em Karniadakis Division of Applied Mathematics, Brown University April 25, 2005 1 Objectives

More information

Multiple realizations: Model variance and data uncertainty

Multiple realizations: Model variance and data uncertainty Stanford Exploration Project, Report 108, April 29, 2001, pages 1?? Multiple realizations: Model variance and data uncertainty Robert G. Clapp 1 ABSTRACT Geophysicists typically produce a single model,

More information

Problem 1: Toolbox (25 pts) For all of the parts of this problem, you are limited to the following sets of tools:

Problem 1: Toolbox (25 pts) For all of the parts of this problem, you are limited to the following sets of tools: CS 322 Final Exam Friday 18 May 2007 150 minutes Problem 1: Toolbox (25 pts) For all of the parts of this problem, you are limited to the following sets of tools: (A) Runge-Kutta 4/5 Method (B) Condition

More information

In the Name of God. Lectures 15&16: Radial Basis Function Networks

In the Name of God. Lectures 15&16: Radial Basis Function Networks 1 In the Name of God Lectures 15&16: Radial Basis Function Networks Some Historical Notes Learning is equivalent to finding a surface in a multidimensional space that provides a best fit to the training

More information

A Non-Intrusive Polynomial Chaos Method For Uncertainty Propagation in CFD Simulations

A Non-Intrusive Polynomial Chaos Method For Uncertainty Propagation in CFD Simulations An Extended Abstract submitted for the 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada January 26 Preferred Session Topic: Uncertainty quantification and stochastic methods for CFD A Non-Intrusive

More information

Measurement And Uncertainty

Measurement And Uncertainty Measurement And Uncertainty Based on Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, NIST Technical Note 1297, 1994 Edition PHYS 407 1 Measurement approximates or

More information

Do not copy, post, or distribute

Do not copy, post, or distribute 14 CORRELATION ANALYSIS AND LINEAR REGRESSION Assessing the Covariability of Two Quantitative Properties 14.0 LEARNING OBJECTIVES In this chapter, we discuss two related techniques for assessing a possible

More information

Gaussian Processes for Machine Learning

Gaussian Processes for Machine Learning Gaussian Processes for Machine Learning Carl Edward Rasmussen Max Planck Institute for Biological Cybernetics Tübingen, Germany carl@tuebingen.mpg.de Carlos III, Madrid, May 2006 The actual science of

More information

Probability and Information Theory. Sargur N. Srihari

Probability and Information Theory. Sargur N. Srihari Probability and Information Theory Sargur N. srihari@cedar.buffalo.edu 1 Topics in Probability and Information Theory Overview 1. Why Probability? 2. Random Variables 3. Probability Distributions 4. Marginal

More information

Midterm. Introduction to Machine Learning. CS 189 Spring Please do not open the exam before you are instructed to do so.

Midterm. Introduction to Machine Learning. CS 189 Spring Please do not open the exam before you are instructed to do so. CS 89 Spring 07 Introduction to Machine Learning Midterm Please do not open the exam before you are instructed to do so. The exam is closed book, closed notes except your one-page cheat sheet. Electronic

More information

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search

More information

CAM Ph.D. Qualifying Exam in Numerical Analysis CONTENTS

CAM Ph.D. Qualifying Exam in Numerical Analysis CONTENTS CAM Ph.D. Qualifying Exam in Numerical Analysis CONTENTS Preliminaries Round-off errors and computer arithmetic, algorithms and convergence Solutions of Equations in One Variable Bisection method, fixed-point

More information

Comparison between Interval and Fuzzy Load Flow Methods Considering Uncertainty

Comparison between Interval and Fuzzy Load Flow Methods Considering Uncertainty Comparison between Interval and Fuzzy Load Flow Methods Considering Uncertainty T.Srinivasarao, 2 P.Mallikarajunarao Department of Electrical Engineering, College of Engineering (A), Andhra University,

More information

Linear Models 1. Isfahan University of Technology Fall Semester, 2014

Linear Models 1. Isfahan University of Technology Fall Semester, 2014 Linear Models 1 Isfahan University of Technology Fall Semester, 2014 References: [1] G. A. F., Seber and A. J. Lee (2003). Linear Regression Analysis (2nd ed.). Hoboken, NJ: Wiley. [2] A. C. Rencher and

More information

Supplementary Note on Bayesian analysis

Supplementary Note on Bayesian analysis Supplementary Note on Bayesian analysis Structured variability of muscle activations supports the minimal intervention principle of motor control Francisco J. Valero-Cuevas 1,2,3, Madhusudhan Venkadesan

More information

arxiv: v1 [physics.comp-ph] 22 Jul 2010

arxiv: v1 [physics.comp-ph] 22 Jul 2010 Gaussian integration with rescaling of abscissas and weights arxiv:007.38v [physics.comp-ph] 22 Jul 200 A. Odrzywolek M. Smoluchowski Institute of Physics, Jagiellonian University, Cracov, Poland Abstract

More information