
TECHNISCHE UNIVERSITÄT MÜNCHEN
Bachelor's Thesis in Engineering Science

Uncertainty Quantification in Fluid Flows via Polynomial Chaos Methodologies

Jan Sültemeyer

DEPARTMENT OF INFORMATICS
MUNICH SCHOOL OF ENGINEERING

TECHNISCHE UNIVERSITÄT MÜNCHEN
Bachelor's Thesis in Engineering Science

Uncertainty Quantification in Fluid Flows via Polynomial Chaos Methodologies
Uncertainty Quantification in Fluidströmungen mit Polynomial Chaos Methoden

Author: Jan Sültemeyer
Supervisor: Prof. Dr. Hans-Joachim Bungartz
Advisor: Ionut-Gabriel Farcas, M.Sc.
Submission Date: 02.06.2016

DEPARTMENT OF INFORMATICS
MUNICH SCHOOL OF ENGINEERING

I confirm that this bachelor's thesis is my own work and I have documented all sources and material used.

Munich, 02.06.2016
Jan Sültemeyer

Acknowledgments First of all, I would like to thank Prof. Hans-Joachim Bungartz for giving me the opportunity to write this thesis at the chair of Scientific Computing. Most of all, I would like to thank my supervisor, Ionut Farcas, who always gave me good advice whenever I ran into problems on the way. His feedback helped me a lot. Also, I thank Christoph Kowitz for providing me with the CFD-Lab code, and Philipp Neumann for explaining its functionalities to me. Moreover, I would like to thank Tobias Neckel for helping me find this interesting topic. Many thanks to my family, my friends, and my girlfriend for their constant encouragement and support. I really appreciate it and would not have been able to carry out this work without them.

Abstract

The focus of this thesis lies on the comparison of two representative methods for uncertainty propagation in computational fluid dynamics. They belong to the family of polynomial chaos methodologies, one of the established families of approaches in uncertainty quantification, and consist in computing the coefficients of a polynomial chaos expansion approximation. The first one, the so-called pseudo-spectral approach, is non-intrusive and approximates the expansion coefficients via numerical quadrature. The second one, the stochastic Galerkin method, is intrusive and computes these coefficients by solving a system of coupled partial differential equations. While the first one relies on several independent runs of the underlying solver, the latter requires solver modifications and is hence more challenging. Furthermore, in order to validate the results obtained with the two aforementioned methods, we also employ Monte Carlo sampling.

We apply these methods to a computational fluid dynamics code that numerically solves the incompressible Navier-Stokes equations for Newtonian fluids via a finite difference scheme. The chosen flow scenario is a two-dimensional lid-driven cavity, an established CFD scenario that is simple and computationally inexpensive. It is therefore well suited for this comparison, and we gain insight into how to use the two methods in practical, more expensive CFD applications. The uncertainty is modeled in the viscosity of the fluid as a continuous random variable with a Gaussian probability distribution. This uncertainty is propagated through the underlying deterministic model, and the outputs of interest are the pressure and the velocity at several locations in the fluid domain. The statistical properties of these outputs, i.e. the mean value and the variance, are computed, and their probability density functions are estimated following a kernel density approach.

Both methodologies are used for these computations, and several metrics are used for a comparison. First, the accuracy of the results and how it depends on the computational cost is analyzed. Then, the runtime of the simulations and their suitability for parallel computing are compared. This comparison shows that the stochastic Galerkin method returns more accurate results than the pseudo-spectral approach, but the latter allows parallel and therefore faster computations. Another aspect of the comparison is the effort needed for the implementation of a method. Due to its non-intrusiveness, the pseudo-spectral approach is much more easily implemented.

Contents

Acknowledgments
Abstract

1 Introduction
2 Computational Fluid Dynamics
   2.1 Description of Fluids
   2.2 Navier-Stokes Equations
   2.3 Discretization and Numerical Solver
   2.4 Flow Scenario and Quantities of Interest
3 Uncertainty Quantification
   3.1 Probability Theory
   3.2 Monte Carlo Sampling
   3.3 Generalized Polynomial Chaos
      3.3.1 Pseudo-Spectral Approach
      3.3.2 Stochastic Galerkin
      3.3.3 Statistical Evaluation
   3.4 Probability Density Estimation
   3.5 The Chaospy Library
4 UQ Simulation
   4.1 Solver Accuracy
   4.2 Monte Carlo Sampling
   4.3 Pseudo-Spectral Approach
   4.4 Stochastic Galerkin
5 Results
   5.1 Validation
   5.2 PDF Estimation
   5.3 Convergence of UQ Simulations
   5.4 Comparison of Methods
6 Conclusion

List of Figures
List of Tables
Bibliography

1 Introduction

A lot of effort has been invested in finding powerful numerical algorithms whose underlying numerical errors are well understood. They can accurately solve mathematical models, for example ordinary or partial differential equations, that describe engineering systems. These algorithms are developed and improved with the goal of obtaining highly accurate solutions. In the development of computers, Moore's law was valid for a long time [18], and computers are still becoming more and more powerful, for example thanks to advances in high performance computing. This allows us to carry out large numerical simulations and obtain accurate results at a reasonably fast speed.

However, this accuracy can only be achieved if all model parameters are known exactly, which is not always the case: in many applications they are not available with arbitrary accuracy. If, for example, data are collected in experiments, they are affected by measurement noise. Analyzing the impact of these uncertainties is the subject of the field of uncertainty quantification (UQ), which has attracted more and more attention in recent years. In this work, we focus on the forward propagation of uncertainty, where the uncertainties are assumed to be in the input parameters and their influence on the results is studied. Thus, UQ helps us to get an insight into how reliable the results of a simulation are. Different methods for these analyses have been developed and are still an open research topic. We mainly compare two of these methods, i.e. the pseudo-spectral approach and the stochastic Galerkin method, both of which are based on polynomial chaos expansions. The main difference between the two is that the first one relies on using the underlying deterministic system as it is, whereas the latter requires some system modifications. We also perform a simulation based on Monte Carlo sampling for validating the results.

These methods are applied in the field of computational fluid dynamics (CFD), where fluid flows are simulated by employing numerical algorithms. It has been one of the main research topics in computational science for decades, and many different algorithms and methods have been developed. CFD applications are therefore well suited for the comparison of UQ methodologies. We choose a relatively simple, two-dimensional problem setup, which is modeled by numerically solving the incompressible Navier-Stokes equations, and apply both methods to it. From the results obtained this way, we get information about the advantages and disadvantages of the methodologies, and about which of them is better suited under specific circumstances.

We can use these insights if we want to simulate a more complicated CFD setup. We see, for example, that the stochastic Galerkin method returns results with a higher accuracy, but we need to spend more effort on its implementation.

Some research on the application of UQ methods to similar CFD simulations is already available. The authors of [33] and [17] focus on the use of the pseudo-spectral approach for higher dimensional problems, and in [7] it is applied in the field of fluid-structure interaction. The use of the stochastic Galerkin method in combination with the Navier-Stokes equations is described in [15, 16] and its implementation in [22].

The rest of this work is organized in three major parts, of which the first one is dedicated to the theoretical background of CFD and UQ. It includes Chapters 2 and 3, each introducing the basic concepts on which the current work is based. The second part, consisting of Chapter 4, describes the performed UQ simulations with the focus on their implementation. In the third and last part, the results of the simulations are discussed, and the different UQ methodologies are compared to each other. The conclusions that arise from these results are described in Chapter 6.

2 Computational Fluid Dynamics

In this chapter, we present the mathematical models that describe the flow of fluids and how they can be solved. The first two sections include the description of fluids and the derivation of the Navier-Stokes equations. Section 2.3 presents how they are applied to simulate fluid flow on a computer. This is followed in Section 2.4 by a description of the flow scenario that we chose for the simulations carried out in the current work. The reader is assumed to be familiar with the basic concepts of fluid dynamics, so we provide only a short overview. A more detailed introduction to fluid mechanics can be found in [14], whereas the numerical simulations are described in [11].

2.1 Description of Fluids

Fluids can be either liquids or gases, and one of their most important properties is the viscosity. It can be imagined as the fluid's resistance to flow and is best described by the following example [23]: assume that water is poured from one cup into another, and the same is done with honey. We observe that the honey's flow velocity is much lower, due to a higher viscosity compared to water. Expressed in technical terms, viscosity describes the inner and outer friction of a fluid. Inner friction is the resistance of neighboring fluid particles to relative movement, and outer friction is observed between fluid particles and a solid surface.

Another important property of fluids is the compressibility, and a distinction is made between compressible and incompressible fluids. A compressible fluid is characterized by the fact that its density can change when an external pressure is applied. If, in contrast, the density stays constant, the fluid is called incompressible. In many applications, the incompressibility assumption is made when dealing with liquids, where it is a good approximation to reality, and the equations underlying the simulation are simplified; see Section 2.2 and [11].

The modeling of fluid flows can be done in two different frameworks: the Eulerian and the Lagrangian one. In the Eulerian framework, flow properties and their change over time are observed at certain fixed points in space. The Lagrangian specification of a flow suggests following a fluid particle and observing how its position changes over time. For describing the differences between the two approaches, let us consider the flow of a river.

Figure 2.1: Laminar flow on the left vs. turbulent flow on the right (source: [2])

Following the Eulerian approach, the observer would stand still on the river bank and look at the whole stream. Following the Lagrangian approach, on the other hand, he would observe the river from a boat that drifts down the river [29].

Fluid flows can be either laminar or turbulent. A flow is called laminar if the fluid particles move in regular parallel layers and the particles of adjacent layers do not mix. In other words, fluid particles that are close to each other move in the same direction and have roughly the same speed, which results in a smooth flow; see Figure 2.1, left. In turbulent flows, particles move chaotically in different directions and unsteady vortices appear, creating a rough flow; see Figure 2.1, right. To estimate whether a flow is laminar or turbulent, we check the Reynolds number [2], which is defined as the ratio between momentum and viscous forces. This dimensionless quantity is formally introduced in Section 2.2. The Reynolds number of a flow indicates which of the two forces dominates, and based on that knowledge we know if the flow is laminar or turbulent. At low Reynolds numbers the viscous forces are dominant, and the flow is laminar. This can be due to a high viscosity of the fluid or a low flow velocity. At high Reynolds numbers the momentum forces dominate and the flow becomes turbulent. This happens at low viscosities or high velocities.

2.2 Navier-Stokes Equations

In this section, we derive the Navier-Stokes equations for incompressible, viscous flows, following mainly [11]. This system of partial differential equations uses the conservation of mass and momentum to describe fluid flows. The appearing variables are the velocity field u, the pressure p, and the density ρ. We begin by introducing Reynolds' transport theorem, which states that for any scalar function f : (x, t) → f(x, t) the following holds:

\frac{d}{dt} \int_{\Omega_t} f(\vec{x}, t) \, d\vec{x} = \int_{\Omega_t} \left\{ \frac{\partial f}{\partial t} + \nabla \cdot (f \vec{u}) \right\}(\vec{x}, t) \, d\vec{x},    (2.1)

with Ω_t being the domain the fluid occupies at the time t ∈ [0, t_end], and x being an arbitrary point in Ω_t. This theorem can be understood as the relation between the Eulerian and the Lagrangian point of view.

On the left-hand side, we consider the change in time of our function in the whole domain, like an observer outside of the domain would do. This corresponds to the Eulerian approach, whereas the right-hand side corresponds to the Lagrangian approach, where the change in time of our function is split into two parts. An observer in the Lagrangian perspective sees a local change in time and a change due to his movement, represented by the two terms in the equation.

We compute the total mass m of a fluid by integrating its density over the occupied domain. As the mass of a system has to be constant, the derivative with respect to time has to be zero:

\frac{d}{dt} m = \frac{d}{dt} \int_{\Omega_t} \rho(\vec{x}, t) \, d\vec{x} = 0.    (2.2)

We apply Reynolds' transport theorem (see Eq. 2.1) and get

\int_{\Omega_t} \left\{ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{u}) \right\}(\vec{x}, t) \, d\vec{x} = 0.    (2.3)

This has to be true for any Ω_t, because the mass is always constant, and we can state that the integrand has to vanish. As we are only considering incompressible fluids, the density is constant and its partial derivatives with respect to space and time are zero. This results in the continuity equation for incompressible fluids:

\nabla \cdot \vec{u} = 0.    (2.4)

The second part of the Navier-Stokes equations is the conservation of momentum. The momentum of a fluid is defined as the integral over the product of density and velocity. To this we can apply Newton's second law, which states that the change in momentum is equal to the sum of all forces acting on the fluid [20]. We consider body forces acting on the whole domain, expressed by a force density g, and forces acting on the surface of the domain, expressed by the stress tensor σ. The typical body force is gravity, and examples of surface forces are pressure and internal friction. Inserting these force terms into Newton's second law yields

\frac{d}{dt} \int_{\Omega_t} \rho(\vec{x}, t) \, \vec{u}(\vec{x}, t) \, d\vec{x} = \int_{\Omega_t} \rho(\vec{x}, t) \, \vec{g}(\vec{x}, t) \, d\vec{x} + \int_{\partial \Omega_t} \sigma(\vec{x}, t) \cdot \vec{n} \, ds.    (2.5)

The term on the left-hand side is rewritten with the help of Reynolds' transport theorem (see Eq. 2.1), and the last term on the right-hand side is transformed into a volume integral with the help of the divergence theorem. Again, this has to hold for arbitrary domains, so the integration is skipped. After rearranging the terms, this gives us the following:

\frac{\partial (\rho \vec{u})}{\partial t} + (\vec{u} \cdot \nabla)(\rho \vec{u}) + (\rho \vec{u})(\nabla \cdot \vec{u}) - \rho \vec{g} - \nabla \cdot \sigma = 0.    (2.6)

In the case of Newtonian fluids, the stress tensor σ can be decomposed into two parts. One is associated with the pressure, and the other one describes the viscous properties given by the strain tensor. After carrying out some computations (see [11]) and introducing the viscosity µ, we finally get the momentum equation:

\frac{\partial \vec{u}}{\partial t} + (\vec{u} \cdot \nabla)\vec{u} + \frac{1}{\rho} \nabla p = \frac{\mu}{\rho} \Delta \vec{u} + \vec{g}.    (2.7)

The final step of our derivation of the Navier-Stokes equations is making them dimensionless. We introduce characteristic reference quantities like a reference length L and velocity V, in order to get dimensionless variables such as x* := x/L for a point in space and t* := tV/L for the time. If we use similar variables for the velocity and the pressure, and insert these new variables into the momentum equation (2.7), we see that we can group all parameters together and obtain the aforementioned Reynolds number. Using either the dynamic viscosity µ or the kinematic one, ν = µ/ρ, it is defined by:

Re := \frac{\rho V L}{\mu} = \frac{V L}{\nu}.    (2.8)

With this we can write the Navier-Stokes equations in their dimensionless form. Note that ∇ and ∆ refer to the dimensionless coordinates:

\nabla \cdot \vec{u} = 0    (2.9)

\frac{\partial \vec{u}}{\partial t} + (\vec{u} \cdot \nabla)\vec{u} + \nabla p = \frac{1}{Re} \Delta \vec{u} + \vec{g}.    (2.10)

In the current work, we use their two-dimensional form, where u denotes the velocity in x-direction and v the velocity in y-direction. After rewriting the term (u · ∇)u with the help of the continuity equation, they read as follows:

\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0    (2.11)

\frac{\partial u}{\partial t} = \frac{1}{Re} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) - \frac{\partial (u^2)}{\partial x} - \frac{\partial (uv)}{\partial y} + g_x - \frac{\partial p}{\partial x}    (2.12)

\frac{\partial v}{\partial t} = \frac{1}{Re} \left( \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} \right) - \frac{\partial (uv)}{\partial x} - \frac{\partial (v^2)}{\partial y} + g_y - \frac{\partial p}{\partial y}.    (2.13)

For the problem statement to be complete, we need appropriate initial and boundary conditions. In the simulations used in the current work, we only apply no-slip conditions, where the fluid at the boundary has to have the same velocity as the surrounding wall. After specifying appropriate initial conditions (in our case, the velocity is set to zero in the whole domain), we have a complete mathematical framework for simulating fluid flows.

Figure 2.2: left: staggered grid, right: domain with boundary strip of ghost cells (source: [11])

2.3 Discretization and Numerical Solver

For solving the Navier-Stokes equations, we use a code that has been developed at the Chair of Scientific Computing at the Technical University of Munich. In this section, we describe the underlying numerical algorithm and how the code interacts with the user. To describe the algorithm we follow [11]; more on the implementation of the code can be found in [19].

We restrict ourselves to two-dimensional problems in rectangular domains. The partial derivatives are discretized by employing a finite difference scheme. In order to avoid numerical instabilities, a staggered rectangular grid is used; see Figure 2.2, left. The domain is decomposed into a finite number of rectangular cells, and in each cell the pressure p is stored at the midpoint of the cell, the horizontal velocity u at the midpoint of the right edge, and the vertical velocity v at the midpoint of the top edge. While this kind of grid has good numerical properties, it causes problems for imposing the boundary conditions. As the horizontal velocity is only stored on the vertical edges, there is no grid point lying directly on the horizontal boundary. The same applies to the vertical velocities on the vertical boundaries. The solution to this problem is to add a layer of ghost cells around the domain; see Figure 2.2, right. The values for the velocities on the boundary are then set to the averages of the two cells on both sides of the boundary. For example, to set the vertical velocity component on the left boundary to zero, we use (1/2)(v_{0,j} + v_{1,j}) = 0, which is equivalent to v_{0,j} = -v_{1,j}. The other boundaries, and boundary conditions with values different from zero, are treated in the same way.

For the discretization of the spatial derivatives, central differences are used. For example, the terms ∂u/∂x and ∂v/∂y of the continuity equation (2.11) are discretized as follows:

\left[ \frac{\partial u}{\partial x} \right]_{i,j} := \frac{u_{i,j} - u_{i-1,j}}{\delta x}, \qquad \left[ \frac{\partial v}{\partial y} \right]_{i,j} := \frac{v_{i,j} - v_{i,j-1}}{\delta y}.    (2.14)
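As a concrete illustration of Eq. 2.14, the following minimal NumPy sketch evaluates the discrete divergence at the cell centers of a staggered grid; the array shapes and index convention are illustrative assumptions and do not reflect the data layout of the thesis code.

import numpy as np

def divergence(u, v, dx, dy):
    # u: shape (nx + 1, ny), values stored on the vertical cell edges
    # v: shape (nx, ny + 1), values stored on the horizontal cell edges
    dudx = (u[1:, :] - u[:-1, :]) / dx   # (u_{i,j} - u_{i-1,j}) / dx
    dvdy = (v[:, 1:] - v[:, :-1]) / dy   # (v_{i,j} - v_{i,j-1}) / dy
    return dudx + dvdy                   # close to zero for an incompressible flow field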

The terms in Eqs. 2.12 and 2.13 are discretized in a similar fashion; see [11]. For reasons of numerical stability, the terms ∂(u²)/∂x and ∂(uv)/∂y are not only discretized with central differences but also with a donor-cell scheme, and a weighted average of both is taken.

The Navier-Stokes equations are only solved at a finite number of points in time in the interval [0, t_end]. For the discretization of the time derivatives, Euler's method is used and first order difference quotients are introduced at each time step. The time step size is denoted by δt and the time level by (n). Applying this discretization to the momentum equations 2.12 and 2.13, and solving for u^(n+1) and v^(n+1), yields the following:

u^{(n+1)} = u^{(n)} + \delta t \left[ \frac{1}{Re} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) - \frac{\partial (u^2)}{\partial x} - \frac{\partial (uv)}{\partial y} + g_x - \frac{\partial p}{\partial x} \right]    (2.15)

v^{(n+1)} = v^{(n)} + \delta t \left[ \frac{1}{Re} \left( \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} \right) - \frac{\partial (uv)}{\partial x} - \frac{\partial (v^2)}{\partial y} + g_y - \frac{\partial p}{\partial y} \right].    (2.16)

This is written more compactly by introducing the variables F and G, containing all terms on the right-hand side except for the pressure terms. After assigning time levels to the new right-hand side we get:

u^{(n+1)} = F^{(n)} - \delta t \, \frac{\partial p^{(n+1)}}{\partial x}, \qquad v^{(n+1)} = G^{(n)} - \delta t \, \frac{\partial p^{(n+1)}}{\partial y}.    (2.17)

These equations are explicit with respect to the velocities and implicit with respect to the pressure. The velocity terms F^(n) and G^(n) are computed by applying the aforementioned finite difference operators, and the pressure is computed with the help of the continuity equation. By inserting Eq. 2.17 into Eq. 2.11 we get the Poisson equation for the pressure:

\frac{\partial^2 p^{(n+1)}}{\partial x^2} + \frac{\partial^2 p^{(n+1)}}{\partial y^2} = \frac{1}{\delta t} \left( \frac{\partial F^{(n)}}{\partial x} + \frac{\partial G^{(n)}}{\partial y} \right).    (2.18)

The functionality used to solve it is provided by the PETSc library (https://www.mcs.anl.gov/petsc/); for more details, we refer the reader to [12]. Here, we focus only on the fact that it computes the pressure at the time level (n+1) from the velocities at the time level (n). The aforementioned steps are summarized in Algorithm 1. Given appropriate initial and boundary conditions, it can be used to compute the fluid's pressure and velocity at every point in time and space.

In the next step, we present how the user interacts with the code; for a more detailed description we refer to [19].

Algorithm 1 Solving the Navier-Stokes equations
1: while t < t_end do
2:    update the time step size δt according to the stability conditions
3:    compute F and G at the current time step (n) with finite differences
4:    solve the Poisson equation for the pressure at the next time step (n+1)
5:    compute the velocities at the next time step (n+1) according to Eq. 2.17
6:    apply boundary conditions
7: end while

The setup is provided by the user in a configuration file, where parameters like the Reynolds number, solver characteristics, mesh sizes, and boundary conditions are defined; an example is shown in Figure 2.3. The solver is started directly in the command window by calling its compiled executable ./ns and giving it the configuration file as an input. The program performs the described computations and saves the results in .vtk files, which are used for visualization purposes; see Section 2.4.

Figure 2.3: Configuration file for the cavity scenario: parameters like the Reynolds number and discretization parameters are defined (see [19])

Figure 2.4: Two-dimensional lid-driven cavity scenario

2.4 Flow Scenario and Quantities of Interest

This code offers functionality for several flow scenarios, including the flow through a channel or over a backward-facing step. In this work, we chose the lid-driven cavity flow scenario, depicted in Figure 2.4, where we visualized the velocity field using ParaView (http://www.paraview.org/). The setup is a hollow cube with solid walls which is filled with a fluid initially at rest. Then, the top wall of the cube (the lid) starts to move with a constant velocity and, because of the no-slip boundary condition, the fluid starts to circulate. We model the flow in a two-dimensional cross section of the cube, parallel to the direction of the moving lid. A useful characteristic of this scenario is that we only apply one type of boundary condition, because for one of the UQ methods, i.e. stochastic Galerkin, we have to modify their application in the code; see Section 4.4.

For the UQ simulations described in Chapter 4, we need to choose some output quantities at which we want to analyze the impact of the introduced uncertainty. They are termed quantities of interest (QoI) and are, in our case, specific pressure and velocity values. We evaluate both at three different points in the domain. The pressure changes only in the upper region, and therefore we evaluate it at the points marked as 1, 2, and 3 in Figure 2.4. Their coordinates are (0.1, 0.95), (0.5, 0.95), and (0.9, 0.95); here the first component corresponds to the distance from the left wall and the second component to the height. The velocity profile in x-direction is evaluated at three points of the vertical center line, namely (0.5, 0.95), (0.5, 0.9), and (0.5, 0.5), corresponding to points 2, 4, and 5 in Figure 2.4.

3 Uncertainty Quantification

In this chapter, we present the theoretical background of uncertainty quantification and the methodologies used in the current work. We focus on the forward propagation of uncertainty, i.e. the propagation of uncertain inputs through the underlying deterministic model together with their statistical post-processing. We model uncertain inputs of a system as random variables with a given probability distribution and analyze their impact on the output quantities by not only computing their mean values and variances, but also estimating the corresponding probability density functions (PDF). This way, we get a more complete picture of how the system behaves under the influence of uncertainty.

We assume the reader to be familiar with the basic concepts of probability theory and provide only a short introduction in Section 3.1. For a more detailed description, we refer to [6] and [24]. In Sections 3.2 and 3.3, we describe the used UQ methodologies. First, we have a brief look at Monte Carlo sampling and then describe in a little more detail the pseudo-spectral approach and the stochastic Galerkin method. This is followed by a section about the PDF estimation, and then Section 3.5 describes the basic functionalities of Chaospy (https://github.com/hplgit/chaospy), a library for UQ simulations in Python.

3.1 Probability Theory

In this section, we follow [6] for the formal definitions of the basic concepts of probability theory. Uncertainty is modeled by a random variable X that assigns a real value to every possible event in an event space Ω; X : Ω → R. In the current work, we use continuous random variables, which can take on any value, although the probability of being equal to one exact specific value is zero. Note that we only consider one-dimensional, i.e. scalar, random variables, not random vectors. Such a random variable can be uniquely described by its cumulative distribution function (CDF), which is defined as the probability that the random variable is smaller than a specific value x,

F_X(x) = P(X \le x),    (3.1)

or by its probability density function (PDF), defined as

\rho_X(x) = \frac{d F_X(x)}{dx}.    (3.2)

If the PDF of a random variable is known, one can compute its expectation or mean value

E[X] = \int x \, \rho_X(x) \, dx,    (3.3)

and its variance

Var[X] = E[(X - E[X])^2] = \int (x - E[X])^2 \, \rho_X(x) \, dx.    (3.4)

In the chapters about the UQ simulations, we use these two quantities for the description of the QoI. For introducing the methods based on polynomial chaos, we need to define an inner product of two polynomials p_1(x) and p_2(x) in the probabilistic space; see Section 3.3. Using the expectation operator, we define it as follows:

\langle p_1(x), p_2(x) \rangle = E[p_1(x) \, p_2(x)] = \int p_1(x) \, p_2(x) \, \rho_X(x) \, dx.    (3.5)

A specific type of probability distribution is the normal or Gaussian distribution, which is defined by its standard deviation σ = √Var and mean value µ. Its PDF reads as

\rho_X(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2 \right).    (3.6)

For the simulations in the current work, all random variables are assumed to be normally distributed. If the distribution has zero mean and unit variance, it is called the standard normal distribution. Every normally distributed random variable X can be transformed to a standard normally distributed variable Y by the transformation X = µ_X + σ_X Y. This transformation proves useful for avoiding numerical instabilities; see Section 3.5.

3.2 Monte Carlo Sampling

Monte Carlo sampling (MCS) is regarded as one of the top ten algorithms of the twentieth century [4] and can be applied to a large range of problems, from mechanics [26] to quantum physics or financial engineering. The original description of MCS can be found in [21]; a detailed introduction is given in [10].

In the context of UQ, MCS consists of three basic steps. First, a number of samples is generated by drawing from the probability distribution of the random input variable. In the second step, the deterministic system is solved with each of these samples as an input, and the QoI are computed. The third step is their statistical evaluation.

This methodology has some useful properties; for example, it is relatively easy to understand and to implement on a computer. We also do not make any assumption about our system and do not interfere with it; we treat it as a "black box". This means that once the algorithm is implemented, it can be reused for other problem setups with only slight modifications. The biggest advantage of this method is the fact that the complexity of MCS does not depend on the dimension of the problem. The convergence speed stays the same for multiple dimensions, making MCS in many cases the only feasible method for solving high dimensional numerical problems.

There is, however, one major drawback. Because MCS relies on the law of large numbers, it needs, compared to other methods, many samples for convergence. The convergence rate for the mean value is typically of the order O(1/\sqrt{n}) for n solutions of the system. This means that in order to halve the error, the number of samples needs to be approximately quadrupled. This results in a large number of calls of the deterministic system, and problems occur if the considered system is computationally expensive.

In the current work, we use MCS only for getting a rough estimate of the results, to which we can compare and validate the results of the alternative methods introduced in the following. For this purpose, we restrict ourselves to computing the mean values and the variances of the QoI.
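As an illustration of these three steps, the following Python sketch propagates the uncertain viscosity through a placeholder model; solve_cavity is a hypothetical stand-in for one run of the CFD solver described in Chapter 2 and is not part of the thesis code.

import numpy as np

rng = np.random.default_rng(0)

def solve_cavity(viscosity):
    # placeholder for one deterministic solver run returning a single QoI;
    # in the thesis this corresponds to running ./ns with the viscosity
    # entering the configuration file via the Reynolds number
    return 1.0 / viscosity

# step 1: draw samples of the uncertain viscosity, here N(0.01, 0.001)
samples = rng.normal(0.01, 0.001, size=1000)

# step 2: solve the deterministic system for each sample
qoi = np.array([solve_cavity(nu) for nu in samples])

# step 3: statistical evaluation of the QoI
print(qoi.mean(), qoi.var(ddof=1))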

3.3 Generalized Polynomial Chaos

In the current work, we compare two different methods for the uncertainty assessment, namely the pseudo-spectral and the stochastic Galerkin approach. The following sections shortly introduce the one-dimensional versions of these methods. For a more detailed explanation or for multi-dimensional problems, we refer the reader to [30] and the references therein. Unlike classical sampling methods, they exploit the often exhibited smoothness of the underlying mathematical model and are well suited to address low dimensional UQ problems, as, typically, a small number of system evaluations is needed to obtain good results. These requirements are met in our setup, and we therefore use them instead of MCS. Both methods are based on the generalized polynomial chaos (gPC) theory [31] and use polynomial expansions for representing random variables. For these expansions, a series of N orthogonal polynomials is used:

\langle \phi_i(x), \phi_j(x) \rangle_{\rho(x)} = \gamma_i \, \delta_{ij} \quad \text{for } i, j = 0, \dots, N-1,    (3.7)

where δ_ij represents the well-known Kronecker delta,

\delta_{ij} = \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{if } i \neq j, \end{cases}    (3.8)

and γ_i are called normalization constants, computed as

\gamma_i = \langle \phi_i, \phi_i \rangle_{\rho(x)} = E[\phi_i^2], \quad i = 0, \dots, N-1.    (3.9)

These polynomials are orthogonal with respect to the PDF of the random input variable, ρ(x). The choice of the type of polynomials depends on this PDF. For normally distributed random variables, the Hermite polynomials are used. This basis was introduced in [28], and many polynomials have been defined for different PDFs; some are discussed in [31].

We consider a system that is represented by a smooth function f : (x, t) → f(x, t), where x is modeled as a continuous random variable and t represents all deterministic input variables. In the simulations of the current work, this function can represent the pressure or the velocity of the fluid at a certain point. We approximate the function by a gPC expansion of the form:

f(x, t) \approx \sum_{i=0}^{N-1} c_i(t) \, \phi_i(x).    (3.10)

Given that the underlying polynomial basis is known, in order to obtain the gPC expansion, we need to compute the coefficients c_i(t) in Eq. 3.10. The two methods we consider in the current work have different approaches to calculating them. They are introduced in the following sections.
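For concreteness, and as a standard fact added here rather than taken from the thesis: for a standard normal input, the basis consists of the probabilists' Hermite polynomials, whose first members and normalization constants are

\phi_0(x) = 1, \quad \phi_1(x) = x, \quad \phi_2(x) = x^2 - 1, \quad \phi_3(x) = x^3 - 3x,
\qquad \gamma_i = E[\phi_i^2] = i! \quad \text{for } \rho(x) = \tfrac{1}{\sqrt{2\pi}} e^{-x^2/2}.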

3 Uncertainty Quantification all inner products φ i, φ j, only the term where i = j remains and is equal to one. Because of the orthogonality relation (see Eq. 3.7), all other terms vanish and we obtain the following decoupled system of equations for computing the coefficients: f (x, t), φ j (x) = c j (t) j = 0,.., N 1. (3.12) As a remark, we mention that these computations can also be interpreted as a least squares minimization. Minimizing the squared distance of the function f to its approximation in the probability space yields the exact same result; see [8]. Now we have a system of equations for the coefficients, but because f (x, t) is not available in closed form, the integrals appearing in the inner products have to be computed numerically via a quadrature rule. The optimal choice in this case is employing Gaussian quadrature, because its degree of exactness is 2K + 1, for K nodes and weights [5]. We approximate the inner product in Equation 3.11 by: c j (t) = Γ f (x, t)φ j (x)ρ(x)dx K 1 k=0 f (x k, t) φ j (x k ) w k j = 0,.., N 1 (3.13) with the nodes x k and weights w k. The nodes for a Gaussian quadrature of order K are the roots of the corresponding polynomial φ K and the weights are the Lagrangian interpolation polynomials evaluated at these nodes. In recent years, a number or fast algorithms have been developed to compute these weights and nodes in O(K) operations; see [1]. It is important to note that the nodes at which the function f is evaluated, do not depend on j. Therefore the computed system output at the nodes can be used for computing all coefficients c j. After we compute the coefficients, the entire post-processing depends solely on them, as described in Sections 3.3.3 and 3.4. The described computations are summed up in algorithm 2. Algorithm 2 UQ simulation following pseudo spectral approach Require: probability distribution, orthogonal polynomials Require: number of quadrature nodes K, number of expansion terms N 1: generate quadrature nodes and weights 2: for k = 0 to K-1 do 3: compute f (x k, t) at node x k 4: end for 5: for j = 0 to N-1 do 6: compute c j via Equation 3.13 7: end for 8: Post-processing: e.g. compute mean values and variances 15

3.3.2 Stochastic Galerkin

Following the stochastic Galerkin approach, we compute the coefficients of the gPC expansion (Eq. 3.10) in a different manner. In this section, we describe the basic steps of the method following mainly [15], [32], and [22]. In a first step, the unknown quantities are written as gPC expansions, and these are then inserted into the equations we want to solve. In our case, gPC expansions for the pressure and velocity terms are used,

\vec{u}(x, t) = \sum_{i=0}^{N-1} \vec{u}_i(t) \, \phi_i(x), \qquad p(x, t) = \sum_{i=0}^{N-1} p_i(t) \, \phi_i(x),    (3.14)

and inserted into the incompressible Navier-Stokes equations 2.9 and 2.10. To obtain the coefficients u_i and p_i, we need to solve the following equations (note that we dropped the gravity term):

\sum_{i=0}^{N-1} \phi_i(x) \, \nabla \cdot \vec{u}_i = 0    (3.15)

\sum_{i=0}^{N-1} \phi_i(x) \frac{\partial \vec{u}_i}{\partial t} + \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (\vec{u}_i \cdot \nabla) \vec{u}_j \, \phi_i(x) \phi_j(x) + \sum_{i=0}^{N-1} \phi_i(x) \, \nabla p_i = \frac{1}{Re} \sum_{i=0}^{N-1} \phi_i(x) \, \Delta \vec{u}_i.    (3.16)

Here, x is a standard normal random variable and the φ_i(x) are the Hermite polynomials. The uncertain parameter in our simulations, described in Chapter 4.4, is the viscosity ν, which enters into the equations via the Reynolds number Re = VL/ν; cf. Eq. 2.8. If we now assume that the reference velocity V and length L are equal to one (as is the case for our simulations), the viscosity can be used to replace the term 1/Re in the above equation. It can be expressed in terms of its mean value ν_0 and standard deviation ν_1 as ν = ν_0 + ν_1 x, by using another polynomial expansion with the first two Hermite polynomials φ_0(x) ≡ 1 and φ_1(x) = x.

The orthogonality of the polynomials is used in a similar fashion as in the pseudo-spectral approach. In the equations 3.15 and 3.16, the scalar product of both sides with one polynomial φ_k is taken. After using the orthogonality relation [15], the resulting system of differential equations is:

\nabla \cdot \vec{u}_k = 0    (3.17)

\frac{\partial \vec{u}_k}{\partial t} + \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (\vec{u}_i \cdot \nabla) \vec{u}_j \, C_{kij} + \nabla p_k = \nu_0 \, \Delta \vec{u}_k + \nu_1 \sum_{i=0}^{N-1} \Delta \vec{u}_i \, C_{ki1}    (3.18)

with the constants C_kij given by

C_{kij} = \frac{\langle \phi_i(x) \phi_j(x) \phi_k(x) \rangle}{\langle \phi_k(x) \phi_k(x) \rangle} = \begin{cases} \dfrac{i! \, j!}{\left(\frac{k+i-j}{2}\right)! \left(\frac{i+j-k}{2}\right)! \left(\frac{j+k-i}{2}\right)!}, & \text{if } k+i+j \text{ is even}, \\ 0, & \text{if } k+i+j \text{ is odd}. \end{cases}    (3.19)

In order to bring these equations into a dimensionless form, we have to replace ν_0 by the constant Reynolds number via 1/Re = ν_0/(VL), introduce the ratio λ = ν_1/ν_0 of the standard deviation and the mean of the viscosity, and replace ν_1 by the expression λ/Re. In the two-dimensional case, the resulting momentum equations take the following form:

\frac{\partial u_k}{\partial t} + \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \left[ \frac{\partial (u_i u_j)}{\partial x} + \frac{\partial (u_j v_i)}{\partial y} \right] C_{kij} = -\frac{\partial p_k}{\partial x} + \frac{1}{Re} \left( \frac{\partial^2 u_k}{\partial x^2} + \frac{\partial^2 u_k}{\partial y^2} \right) + \frac{\lambda}{Re} \sum_{i=0}^{N-1} \left[ \left( \frac{\partial^2 u_i}{\partial x^2} + \frac{\partial^2 u_i}{\partial y^2} \right) C_{ki1} \right]    (3.20)

\frac{\partial v_k}{\partial t} + \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \left[ \frac{\partial (u_i v_j)}{\partial x} + \frac{\partial (v_i v_j)}{\partial y} \right] C_{kij} = -\frac{\partial p_k}{\partial y} + \frac{1}{Re} \left( \frac{\partial^2 v_k}{\partial x^2} + \frac{\partial^2 v_k}{\partial y^2} \right) + \frac{\lambda}{Re} \sum_{i=0}^{N-1} \left[ \left( \frac{\partial^2 v_i}{\partial x^2} + \frac{\partial^2 v_i}{\partial y^2} \right) C_{ki1} \right].    (3.21)

These coupled equations form a system which has the form of N Navier-Stokes equations. We can therefore use a solver which can deal with the original equations, but we have to modify it; see Chapter 4.4. Algorithm 1 is changed as follows:

Algorithm 3 UQ simulation following the stochastic Galerkin method
1: while t < t_end do
2:    update the time step size δt according to the stability conditions
3:    for k = 0 to N-1 do
4:       compute F and G at the current time step with the current coefficients u_k and v_k
5:       solve the Poisson equation for the pressure coefficient p_k at the next time step
6:       compute the velocity coefficients u_k and v_k at the next time step
7:       apply boundary conditions
8:    end for
9: end while

After computing all coefficients u_k, v_k, and p_k, we continue with the post-processing, which is discussed in the following section.
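The coupling constants of Eq. 3.19 are cheap to precompute; the following small Python function is a sketch of that formula (the function name is ours, not taken from the thesis code).

from math import factorial

def c_kij(k, i, j):
    # coupling constant C_kij of Eq. 3.19 for Hermite polynomials;
    # zero if k+i+j is odd or if the indices violate the triangle condition
    s = k + i + j
    if s % 2 == 1:
        return 0.0
    a = s // 2 - j   # (k+i-j)/2
    b = s // 2 - k   # (i+j-k)/2
    c = s // 2 - i   # (j+k-i)/2
    if min(a, b, c) < 0:
        return 0.0
    return factorial(i) * factorial(j) / (factorial(a) * factorial(b) * factorial(c))

# the constants C_ki1 appear in Eqs. 3.20 and 3.21; for example C_011 = C_101 = 1, C_111 = 0
print(c_kij(0, 1, 1), c_kij(1, 0, 1), c_kij(1, 1, 1))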

3.3.3 Statistical Evaluation

Once the coefficients of the gPC expansion are available, we can compute the expectation and the variance of our stochastic output f; see Eq. 3.10. The expectation is given by [25]:

E[f(x, t)] \approx E\left[ \sum_{i=0}^{N-1} c_i(t) \, \phi_i(x) \right] = E[c_0(t) \phi_0(x)] + E\left[ \sum_{i=1}^{N-1} c_i(t) \, \phi_i(x) \right].    (3.22)

Because the first polynomial φ_0(x) is set to one, the first term is equal to c_0(t). In the second term, the expectation operator can be taken into the sum, and we get a sum over the expectations of all the other polynomials. These terms all have to be zero because of the orthogonality:

E[\phi_i] = E[\phi_0 \, \phi_i] = 0.    (3.23)

With this, the expectation becomes:

E[f(x, t)] = c_0(t).    (3.24)

We get the variance in a similar way. The gPC expansion of f is used, the first term is taken out of the sum, and it is canceled out:

Var[f(x, t)] = E\left[ (f(x, t) - E[f(x, t)])^2 \right] \approx E\left[ \left( \sum_{i=0}^{N-1} c_i(t) \, \phi_i(x) - c_0(t) \right)^2 \right]    (3.25)

= E\left[ \left( \sum_{i=1}^{N-1} c_i(t) \, \phi_i(x) \right)^2 \right].    (3.26)

If we now compute the square of the sum, we see that all the mixed terms vanish because of the orthogonality. We are left with the sum of the squared remaining coefficients:

Var[f(x, t)] = \sum_{i=1}^{N-1} c_i(t)^2 \, E[\phi_i(x)^2] = \sum_{i=1}^{N-1} c_i(t)^2 \, \gamma_i.    (3.27)

If the polynomials are normalized, as we assumed in the pseudo-spectral approach, we do not have to take the normalization constants γ_i into account. We see that once the coefficients are computed, we can immediately get the mean and the variance of our function, which give us a rough understanding of how it behaves if the input is a random variable. This behavior can be even better understood by estimating a PDF of the output. In the next section, we describe how the coefficients and polynomials can be used for these estimates.
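A minimal sketch of Eqs. 3.24 and 3.27, assuming the coefficients c_i have already been computed by one of the two methods; the function name and NumPy interface are illustrative assumptions.

import numpy as np

def gpc_mean_and_variance(coefficients, gammas=None):
    # mean and variance of a gPC expansion from its coefficients (Eqs. 3.24 and 3.27);
    # gammas are the normalization constants gamma_i (omit them for an orthonormal basis)
    c = np.asarray(coefficients, dtype=float)
    g = np.ones_like(c) if gammas is None else np.asarray(gammas, dtype=float)
    return c[0], np.sum(c[1:] ** 2 * g[1:])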

Figure 3.1: Approaches for PDF estimation: histogram (a) and KDE (b) (source: [25])

3.4 Probability Density Estimation

Whereas the mean and the variance are only point estimates, a PDF gives us information on the entire support of the QoI. After computing the coefficients of the gPC expansions with the described methods, we have polynomial functions to represent them. If we only compute the mean and the variance in the post-processing step, a lot of information remains unused. In this section, we shortly introduce the basic steps for estimating a PDF, following mainly [7]; a more detailed description can be found in [25].

The first step for getting an estimate of a PDF is to sample values of the gPC expansion at different points, depending on the input distribution. We call these values x_i, and they represent pressure or velocity values in our case. We now have different possibilities to arrive at a PDF. One way is to represent the data as a histogram, which means that we divide the support of our QoI into bins of equal width and count the number of x_i that fall into each bin. The height of the bins is then proportional to that number. To get a PDF we can simply fit a curve to this histogram; see Figure 3.1 (a). The problem with this approach is that the outcome heavily depends on the choice of bins [25].

In the current work, we use a different approach called kernel density estimation (KDE). Following the KDE approach, we compute a known kernel for every x_i and then get the PDF by summing them up; see Figure 3.1 (b). We can choose from a variety of kernels and use Gaussian ones of the form

K(x, h) \propto \exp\left( -\frac{x^2}{2h^2} \right).    (3.28)

The PDF ρ(x) is then computed as

\rho(x) = \frac{1}{Nh} \sum_{i=1}^{N} K\left( \frac{x - x_i}{h} \right),    (3.29)

where h is called the bandwidth, and it defines how smooth the resulting PDF is.
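A direct NumPy sketch of Eq. 3.29 with a normalized Gaussian kernel; the function name and interface are ours, for illustration only (the thesis itself uses scikit-learn, as described next).

import numpy as np

def kde_pdf(x, samples, h):
    # kernel density estimate of Eq. 3.29 with Gaussian kernels (Eq. 3.28)
    x = np.atleast_1d(x)[:, None]           # evaluation points
    xi = np.asarray(samples)[None, :]       # sampled values of the QoI
    kernels = np.exp(-0.5 * ((x - xi) / h) ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.sum(axis=1) / (len(samples) * h)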

For estimating the PDFs, we use the Python toolbox scikit-learn (http://scikit-learn.org/stable/), which provides all the needed functionalities, and follow this algorithm:

Algorithm 4 KDE approach to PDF estimation
Require: gPC expansion of the QoI
Require: probability distribution of the random input variable
Require: number of samples N
1: for i = 0 to N-1 do
2:    generate a sample ξ_i from the input distribution
3:    evaluate the gPC expansion (Eq. 3.10) at ξ_i and save the result x_i
4: end for
5: input all x_i into the KDE function and generate the PDF

The implementation of the PDF estimation is described in Chapter 4, and the results are shown and discussed in Chapter 5.

3.5 The Chaospy Library

For the UQ simulations in the current work, especially those following the pseudo-spectral approach, we use the Chaospy library, which is an open source Python module. Its main advantage is simplifying the implementation of non-intrusive methods based on polynomial chaos in Python, hence the name Chaospy. As it uses the libraries NumPy and SciPy, it allows fast computations. In the following, we introduce its basic functionalities; a more detailed description can be found in [8, 9]. Its usage is illustrated by the following example. These commands create a normal distribution with mean 100 and standard deviation 10:

import chaospy as cp
dist = cp.Normal(100., 10.)

In Chapter 4.3, the Python code used for the simulations is described. Here, we want to address another issue, namely the way Chaospy obtains the orthogonal polynomials used for the gPC expansions; see Section 3.3. The most commonly known algorithm for computing orthogonal polynomials is the Gram-Schmidt process [3]. However, it is not always numerically stable. Therefore, Chaospy uses another, stable algorithm called the three-term recursion, which is defined as follows:

\phi_{n+1}(x) = \phi_n(x) \, (x - A_n) - \phi_{n-1}(x) \, B_n    (3.30)

3 Uncertainty Quantification polynomials residual of scalar product 6 5 3.05 10 5 7 6 7.74 10 4 8 7 0.014 9 8 6.25 10 9 576 error of polynomial computed norm 6 2.44 10 4 7 9.84 10 3 8 0.71 9 29 10 2553 Table 3.1: Expectation of product of polynomials generated by Chaospy: E[φ i φ j ] A n = x φ n, φ n φ n, φ n = E[x φ2 n] E[φ 2 n] B n = φ n, φ n φ n 1, φ n 1 = E[φ2 n] E[φ 2 n 1 ] = γ n γ n 1. (3.31) If φ 1 = 0 and φ 0 = 1 are used, this is called discretized Stieltjes procedure [27]. The scalar products are all taken with respect to the probability density ρ(x). in Chaospy, this functionality is provided by the function cp.orth_ttr(n,dist), where n is the number of polynomials and dist is a probability distribution. If we want normalized polynomials, we can use the extra input argument normed = True. Although this algorithm is stable, we can get large numerical errors in certain contexts. This is demonstrated by the following example. Suppose we want 10 normed polynomials which are orthogonal with respect to the normal distribution generated earlier. These are computed via polynomials = cp.orth_ttr(10, dist, normed = True). We can now check if they indeed are orthogonal by simply computing the expectation of the product of two polynomials with commands like norm = cp.e(polynomials[i]*polynomials[j], dist). According to the orthogonality relation the result should be zero within machine precision for i = j and one for i = j; see Eq. 3.7. If we compute these expectations for different i and j, we get the results displayed in Table 3.1. In the left table we see the results of the mixed expectations and in the right one how much the computed norms differ from one. We clearly see that the results are not always close to zero. This means, that if we use eight or more polynomials for the chosen distribution, the results of any simulation will be completely useless. Even if we choose only six or seven polynomials, we already introduce quite a large numerical error, although all we did was generate the supposedly orthogonal polynomials. These results can be explained by the fact that Chaospy stores the coefficients of the polynomials and then uses them to compute the next one. This works fine as long as the order is low, but creates problems for higher order polynomials, because the task of evaluating a polynomial that is given by its coefficients is ill conditioned 21

As computing the expectation of a polynomial, which is required for computing the next one (see Eqs. 3.30 and 3.31), involves these evaluations, we see the problems in our above example. A more detailed discussion of conditioning and of numerics in general can, for example, be found in [5]. For us it is enough to note that numerical errors are introduced if the order of the polynomials or the value of the coefficients gets large. Because of this, we have the described problems if we use many polynomials or, as the coefficients include factorials (see Eq. 3.19), distributions with a large mean value.

One way of overcoming these problems to a certain extent is to generate the polynomials orthogonal with respect to the standard normal distribution, and to use the linear transformation described in Section 3.1 for getting the nodes in the pseudo-spectral approach; see Section 3.3.1. This way, the error in the norms and the mixed expectations is only of the order of 10^-13 for ten polynomials.
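A minimal sketch of this workaround, assuming the pseudo-spectral pipeline of Section 3.3.1; the variable names are illustrative. The basis and the quadrature rule are built for a standard normal variable, and the nodes are mapped to viscosity values via X = µ + σY before the solver is called.

import chaospy as cp

mu, sigma = 0.01, 0.001                 # mean and standard deviation of the viscosity
std_dist = cp.Normal(0.0, 1.0)

# basis and Gaussian quadrature with respect to the standard normal variable
polynomials = cp.orth_ttr(9, std_dist, normed=True)
nodes, weights = cp.generate_quadrature(9, std_dist, rule="G")

# map the standardized nodes to physical viscosity values before the solver runs
viscosity_nodes = mu + sigma * nodes[0]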

4 UQ Simulation

In this chapter, we describe the implementation of the UQ methods introduced previously. First, we take a brief look at the results of some deterministic runs of the CFD solver described in Section 2.3, in order to get an idea of the numerical errors for different mesh sizes. In Section 4.2, the implementation of the Monte Carlo algorithm is shown, and the implementation of the pseudo-spectral approach is the topic of Section 4.3. This is followed by a section about the modifications of the CFD code for the stochastic Galerkin method. The reader should be aware that, in all UQ simulations, we model the viscosity as a random variable with a normal probability distribution with mean 0.01 and standard deviation 0.001.

4.1 Solver Accuracy

The first simulations carried out in the current work are of a deterministic nature, because we want to examine the numerical accuracy of the CFD solver before going on to the stochastic simulations. We run the code multiple times with different mesh sizes and check the differences in the results in order to obtain this accuracy. As the time step size is computed automatically, changing the spatial step size is sufficient. The code below shows the most important lines from the Python script used to run the simulation for different mesh sizes with 50 to 300 cells in each direction. It writes the computed velocities and pressures at the points defined in the lists vpoints and ppoints into text files.

import os

# helper functions defined elsewhere in the script: change_grid_in_conf rewrites
# the mesh size in the configuration file, get_vtk_data reads the QoI from the
# .vtk output, and write_line_to_files appends them to text files

jmax = 26
size = [50 + 10*i for i in range(0, jmax)]              # 50, 60, ..., 300 cells
vpoints = [[0.5, 0.95, 0], [0.5, 0.9, 0], [0.5, 0.5, 0]]
ppoints = [[0.1, 0.95, 0], [0.5, 0.95, 0], [0.9, 0.95, 0]]

for j in range(0, jmax):
    change_grid_in_conf(size[j])                        # set the mesh size
    os.system('./ns conf_cavity_2.xml')                 # one deterministic solver run
    v, p = get_vtk_data(vpoints, ppoints)
    write_line_to_files(v, p)