A Finite-Element based Navier-Stokes Solver for LES


W. Wienken a, J. Stiller b and U. Fladrich c

a Technische Universität Dresden, Institute of Fluid Mechanics (ISM)
b Technische Universität Dresden, Institute for Aerospace Engineering (ILR)
c Technische Universität Dresden, Center for High Performance Computing (ZHR), D-01062 Dresden, Germany

A new Navier-Stokes solver for laminar and turbulent flow is presented. Special focus is laid on large-eddy simulation of turbulent flows in complex geometries. For discretisation, a streamline-upwind/Petrov-Galerkin (SUPG) finite element method is employed on an unstructured grid of tetrahedral cells. Temporal integration is carried out with an explicit Runge-Kutta scheme. To reduce computational time, parallelisation based on grid partitioning is used. The new solver is validated for various laminar and turbulent flows, including turbulent channel flow and the flow around a square cylinder. The computations agree well with results from experiments and direct numerical simulations. Furthermore, due to extensive optimisation, the solver exhibits excellent scalability even on a large number of processors.

Keywords: large-eddy simulation, finite element method, parallelisation

1. Introduction

The importance of turbulent flows has led to a continuous effort towards more accurate turbulence models and cost-effective numerical algorithms. Additionally, the steady increase in power and the decreasing price of computers have made it possible to consider these rather expensive models for engineering purposes. An attractive approach is large-eddy simulation (LES), which computes the large scales of turbulence directly while modelling the unresolved small structures. Using LES for flows of engineering interest demands numerical algorithms that can deal with complex geometries without losing too much efficiency compared to specialised methods. A promising approach is the application of unstructured meshes. Their higher numerical complexity is prospectively justified by the potential of adaptive grid refinement techniques. Furthermore, dynamic grid partitioning is regarded as an appropriate technique for speeding up calculations. In the present work, an enhanced SUPG finite element method is adopted for computing compressible turbulent flows within the LES framework.

2. Basic Equations

The LES approach exploits the finding that turbulent flows are dominated by large structures, which are long-lived, energetic, non-isotropic and depend strongly on the geometry as well as on the initial and boundary conditions. In contrast, the small scales are short-lived, low-energetic, universal, isotropic and, in the statistical average, dissipative, and are therefore easier to model. For LES the large scales are separated from the small scales by a filtering operation (see e.g. Martín et al. [1]). The filtered Navier-Stokes equations read

$$ \partial_t \bar{U} + \partial_i \bar{F}_i = \partial_i \bar{D}_i $$

where $\bar{U}$ are the resolved conservative variables, and $\bar{F}_i = \tilde{F}_i + F_i^{\mathrm{sgs}}$, $\bar{D}_i = \tilde{D}_i + D_i^{\mathrm{sgs}}$ respectively represent the filtered advective and diffusive fluxes, each split into a resolved and a subgrid-scale (SGS) contribution. In general, the resolved part of any function $f(U)$ is defined as $\tilde{f} = f(\bar{U})$. In particular, the resolved variables and fluxes can be written as

$$ \bar{U} = \begin{pmatrix} \bar\rho \\ \bar\rho\,\tilde{u}_1 \\ \bar\rho\,\tilde{u}_2 \\ \bar\rho\,\tilde{u}_3 \\ \bar\rho\,\tilde{E} \end{pmatrix}, \qquad \tilde{F}_i = \tilde{u}_i \bar{U} + \bar{p} \begin{pmatrix} 0 \\ \delta_{i1} \\ \delta_{i2} \\ \delta_{i3} \\ \tilde{u}_i \end{pmatrix}, \qquad \tilde{D}_i = \begin{pmatrix} 0 \\ \tilde\tau_{i1} \\ \tilde\tau_{i2} \\ \tilde\tau_{i3} \\ \tilde{u}_j \tilde\tau_{ij} - \tilde{q}_i \end{pmatrix} $$

with

$$ \tilde{E} = c_v \tilde{T} + \tfrac12 \tilde{u}^2, \qquad \tilde\tau_{ij} = 2\tilde\eta \left( \tilde\epsilon_{ij} - \tfrac13 \delta_{ij}\, \tilde\epsilon_{kk} \right), \qquad \tilde\epsilon_{ij} = \tfrac12 \left( \partial_j \tilde{u}_i + \partial_i \tilde{u}_j \right), \qquad \tilde{q}_i = -\tilde\lambda\, \partial_i \tilde{T} $$

where $\rho$ is the density, $\tilde{u}$ the velocity, $p$ the pressure, $T$ the temperature, $\tilde{E}$ the specific total energy, $\tau$ the viscous stress tensor, and $q$ the heat flux. The SGS contribution to the advective fluxes can be summarised as

$$ F_i^{\mathrm{sgs}} = \begin{pmatrix} 0 \\ \tau_{i1}^{\mathrm{sgs}} \\ \tau_{i2}^{\mathrm{sgs}} \\ \tau_{i3}^{\mathrm{sgs}} \\ \tilde{u}_j \tau_{ij}^{\mathrm{sgs}} + q_i^{\mathrm{sgs}} \end{pmatrix} $$

with

$$ \tau_{ij}^{\mathrm{sgs}} = \overline{\rho u_i u_j} - \bar\rho\, \tilde{u}_i \tilde{u}_j, \qquad q_i^{\mathrm{sgs}} = \overline{\rho u_i h} - \bar\rho\, \tilde{u}_i \tilde{h}. $$

The SGS stresses $\tau_{ij}^{\mathrm{sgs}}$ are computed using the Smagorinsky model with van Driest damping [3,4], while the Reynolds analogy is adopted for the SGS heat flux $q_i^{\mathrm{sgs}}$. The subgrid contributions emerging from the diffusive fluxes can be neglected according to [2,1].
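As an aside, the Smagorinsky closure with van Driest damping named above determines an eddy viscosity from the resolved strain rate. The following minimal Python sketch is illustrative only, not the authors' implementation; the constants `C_s` and `A_plus` are common literature values and are assumptions here.

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, y_plus, C_s=0.1, A_plus=25.0):
    """Smagorinsky SGS eddy viscosity with van Driest wall damping.

    grad_u : (..., 3, 3) array of resolved velocity gradients du_i/dx_j
    delta  : filter width, e.g. the cube root of the local cell volume
    y_plus : wall distance in viscous units
    C_s, A_plus : typical literature constants (assumptions, not from
                  the paper)
    """
    # Resolved strain-rate tensor eps_ij = (du_i/dx_j + du_j/dx_i) / 2
    eps = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
    # Strain-rate magnitude |S| = sqrt(2 eps_ij eps_ij)
    s_mag = np.sqrt(2.0 * np.sum(eps * eps, axis=(-1, -2)))
    # Van Driest damping drives the SGS length scale to zero at the wall
    damping = 1.0 - np.exp(-y_plus / A_plus)
    # nu_t = (C_s * damping * delta)^2 |S|; tau_ij^sgs then follows from
    # the eddy-viscosity ansatz applied to the trace-free strain rate
    return (C_s * damping * delta) ** 2 * s_mag
```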

3. Numerics

For spatial discretisation a SUPG finite element method [5-7] with linear shape functions is applied. The resulting integral formulation reads

$$ \int_\Omega \left[ W\, \partial_t \bar{U} - \partial_i W \left( \tilde{F}_i - \tilde{D}_i \right) \right] dV + \int_\Gamma W \left( \tilde{F}_i - \tilde{D}_i \right) n_i \, dA + \sum_{e=1}^{N_e} \int_{\Omega_e} \left( A_i^T\, \partial_i W \right)^T \mathsf{T} \left( \partial_t \bar{U} + \partial_i \tilde{F}_i - \partial_i \tilde{D}_i \right) dV = 0. $$

The first two integrals constitute the usual Galerkin formulation, while the last expression represents the SUPG operator. $W$ is a piecewise linear weight function and $\mathsf{T}$ the stabilisation matrix, which depends on the local element size $\Delta x$ and the squares of the advective flux Jacobians $A_i$ [5]. The spatial discretisation results in a time-dependent system of ordinary differential equations that is integrated using an explicit 4-stage Runge-Kutta scheme along with a damped Jacobi iteration for resolving the consistent mass matrix; a minimal sketch of this time integration is given after Fig. 1. We remark that the finite element formulation accommodates three different methods: dropping the stabilisation results in the second-order accurate but unstable Galerkin FEM; omitting the time derivative and the diffusive fluxes in the SUPG operator gives the first-order streamline diffusion (SD) method; otherwise we obtain the full SUPG FEM, representing a second-order upwind scheme. Here only the latter is used. A detailed comparison of these methods is the subject of a forthcoming paper.

4. Implementation

The numerical model was implemented on top of the grid library MG (Multilevel Grids) [8]. MG provides a light-weight interface for parallel adaptive finite element solvers on tetrahedral grids. The kernel of MG has been shown to scale up to several hundred processors. Interprocessor communication is based on MPI. The grid adaptation starts with a single coarse grid and results in a dynamically distributed multilevel grid. Optionally, only the relevant grid portions are stored on each grid level. For illustration, Fig. 1 depicts the local grid refinement and partitioning of a simple configuration. The current Navier-Stokes solver (MG-NS) can be run in adaptive mode but uses only the finest grid level. Besides LES, various statistical models are available for the simulation of turbulent flows. Additionally, interfaces for other transport equations are provided.

Figure 1. Adapted and distributed multilevel grid: (a) Level 1: initial grid; (b) Level 2; (c) Level 3.
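To make the time integration of Section 3 concrete, the sketch below shows one explicit 4-stage Runge-Kutta step in which every stage resolves the consistent mass matrix $M$ by a damped Jacobi iteration. This is a schematic under stated assumptions: the classical RK4 coefficients, the damping factor, the iteration count and the dense matrix are placeholders, since the paper does not specify them; a production code would carry out these operations element-wise on the distributed grid.

```python
import numpy as np

def solve_mass_matrix(M, b, n_iter=4, omega=0.7):
    """Approximately solve M x = b by damped Jacobi iteration.

    M is the consistent finite-element mass matrix (dense here for
    simplicity). Iteration count and damping factor are assumptions."""
    D = M.diagonal()
    x = b / D  # lumped-mass initial guess
    for _ in range(n_iter):
        x = x + omega * (b - M @ x) / D
    return x

def rk4_step(u, dt, residual, M):
    """One explicit 4-stage Runge-Kutta step for M du/dt = R(u).

    The classical RK4 coefficients below stand in for the (unspecified)
    scheme of the paper."""
    k1 = solve_mass_matrix(M, residual(u))
    k2 = solve_mass_matrix(M, residual(u + 0.5 * dt * k1))
    k3 = solve_mass_matrix(M, residual(u + 0.5 * dt * k2))
    k4 = solve_mass_matrix(M, residual(u + dt * k3))
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```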

Figure 2. Picture of a parallel run on 16 CPUs.

5. Results

5.1. Parallel Performance Analysis

Throughout code development, extensive performance analysis and optimisation were part of the implementation efforts. The analysis tool Vampir [11] was used intensively for this purpose. Fig. 2 shows the Vampir visualisation of a run on 16 CPUs of an SGI Origin 3800 system. The colours identify different stages of the program execution: red indicates calls to communication subroutines, and the other colours represent different states of calculation. Messages sent between processors are shown as black lines. In addition to the global view, Fig. 3 shows the behaviour of the program on one of the 16 processors. The different stages of a time step (e.g. the Runge-Kutta sub-steps) can clearly be identified. As both pictures suggest, communication constitutes a relatively small share of the execution time compared to the actual calculations. This proposition is supported by the parallel efficiency $E_n = T_1 / (n\,T_n)$, where $T_n$ denotes the execution time on $n$ processors. The tests were conducted at the TU Dresden on an SGI Origin 3800 system with 128 processors, 120 of which can be employed for user applications. Turbulent channel flow with a constant global mesh size served as the test problem. Figure 4(a) depicts the efficiency as a function of the number of processors used. The measured efficiency is above 96% for all runs. The values higher than 1.0 (tests with 16 to 64 processors) are caused by cache effects as the local problem size becomes smaller. Consistent with the almost constant parallel efficiency, the speed-up $S_n = T_1 / T_n$ is almost linear, as can be seen in Fig. 4(b).
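Both scalability measures follow directly from the measured execution times. A short sketch with hypothetical timings, chosen only to mimic the reported behaviour (efficiency above 0.96, values above 1.0 in the mid range due to cache effects):

```python
def parallel_metrics(timings):
    """Speed-up S_n = T_1/T_n and efficiency E_n = T_1/(n T_n) from
    measured execution times given as {n_processors: seconds}."""
    t1 = timings[1]
    speedup = {n: t1 / t for n, t in timings.items()}
    efficiency = {n: t1 / (n * t) for n, t in timings.items()}
    return speedup, efficiency

# Hypothetical timings, not measured data from the paper:
S, E = parallel_metrics({1: 1000.0, 16: 60.0, 120: 8.6})
# E[16] ~ 1.04 (cache effect), E[120] ~ 0.97
```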

Figure 3. Time-step analysis on a single processor: two time steps, each consisting of four Runge-Kutta sub-steps with communication, local residual computation and mass-matrix (Jacobi) iterations.

Figure 4. Scalability tests on the SGI Origin 3800: (a) parallel efficiency; (b) speed-up. Test problem: turbulent channel flow on a 202x50x64 grid (646,400 cells), Reynolds number 3300, CFL number 0.5; 1-120 of 128 processors, memory usage ~1 GB.

Earlier tests on a larger number of processors (up to 512) with an application different from MG-NS but also based on MG give us confidence that the efficiency will not decrease significantly shortly beyond 120 processors.

5.2. Supersonic Flow past a Sphere

The supersonic flow past a sphere was chosen to test the implemented dynamic grid adaptation and the stabilisation. Figure 5 shows a comparison between a schlieren shadowgraph by A. C. Charters (from Van Dyke [9]) and a computation using an adaptive grid consisting of ca. $6 \times 10^5$ nodes. Though the Reynolds number was considerably smaller in the computation ($Re_d = 2000$), the excellent qualitative agreement is evident.

Figure 5. Flow past a sphere at a Mach number of 1.53: (a) experiment [9]; (b) computation (Mach number colouring).

5.3. Turbulent Channel Flow

The validation of the code for LES included turbulent flows such as homogeneous isotropic turbulence, turbulent channel flow and the flow around a square cylinder. Here, we report only some results obtained for the turbulent channel flow; additional results will be presented in a forthcoming paper. Figure 6(a) depicts the configuration used for the LES. The flow is driven by a constant average pressure gradient in the x direction. For the homogeneous x and z directions periodic boundary conditions are assumed. The chosen Reynolds number ($Re_\tau = u_\tau \delta / \nu = 180$, where $u_\tau$ is the wall shear velocity) corresponds to the direct numerical simulations carried out by Kim et al. [10]. For the LES a regular tetrahedral grid consisting of ca. $3.65 \times 10^5$ nodes was used. The computed average velocity (Fig. 6(b)) is in good qualitative agreement with the DNS data. However, the maximum velocity is overestimated, which can be attributed to the grid being too coarse to resolve the wall layer accurately.
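For the comparison in Fig. 6(b,c), the LES profiles are scaled to wall units. A minimal sketch of this normalisation (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def to_wall_units(y, u_mean, u_tau, nu):
    """Scale a mean velocity profile to wall units:
    y+ = y u_tau / nu, u+ = <u> / u_tau.

    For the pressure-driven channel, u_tau follows from the imposed
    mean pressure gradient: u_tau = sqrt(-dp/dx * delta / rho)."""
    return y * u_tau / nu, u_mean / u_tau

# Example: at Re_tau = u_tau * delta / nu = 180 (as in the DNS [10]),
# the channel half-width delta corresponds to y+ = 180.
```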

The velocity fluctuations (Fig. 6(c)) agree well with the DNS. Finally, the resolved Reynolds stress is compared to the asymptotic theory for high Reynolds numbers in Fig. 6(d).

Figure 6. Set-up and results for turbulent channel flow: (a) configuration; (b) average velocity (symbols: LES; solid: DNS [10]); (c) resolved velocity fluctuations (symbols: LES components 1, 2, 3; solid: DNS [10]); (d) resolved Reynolds stress (solid: LES; dash-dot: asymptotic theory).

6. Conclusion and Outlook

The newly developed solver proved to be a valuable tool for computing laminar and turbulent flows in complex geometries. The qualitative and quantitative agreement with experiments and DNS data is good. Due to sustained optimisation, the parallel efficiency is above 96% even for large processor numbers.

So far, only a part of MG's features is used for the LES computations. In particular, the capability for dynamic grid adaptation appears very attractive, despite considerable theoretical problems concerning subgrid-scale modelling.

REFERENCES

1. Martín, M. P., U. Piomelli and G. V. Candler, Subgrid-Scale Models for Compressible Large-Eddy Simulations, Theoretical and Computational Fluid Dynamics, 13:361-376, 2000.
2. Vreman, B., B. Geurts and H. Kuerten, Subgrid-modelling in LES of compressible flow, Applied Scientific Research, 54:191-203, 1995.
3. Smagorinsky, J., General Circulation Experiments with the Primitive Equations, Monthly Weather Review, 91(3):99-164, 1963.
4. Yoshizawa, A., Statistical theory for compressible turbulent shear flows, with application to subgrid modeling, Physics of Fluids A, 29:2152-2164, 1986.
5. Shakib, F., T. J. R. Hughes and Z. Johan, A New Finite Element Formulation for Computational Fluid Dynamics: X. The Compressible Euler and Navier-Stokes Equations, Computer Methods in Applied Mechanics and Engineering, 89:141-219, 1991.
6. Hauke, G. and T. J. R. Hughes, A unified approach to compressible and incompressible flows, Computer Methods in Applied Mechanics and Engineering, 113:389-395, 1994.
7. Jansen, K. E., S. S. Collis, C. Whiting and F. Shakib, A better consistency for low-order stabilized finite element methods, Computer Methods in Applied Mechanics and Engineering, 174:153-170, 1999.
8. Stiller, J. and W. E. Nagel, MG - A Toolbox for Parallel Grid Adaption and Implementing Unstructured Multigrid Solvers, Proc. Parallel Computing 1999, Delft, August 17-20, 1999. To be published by Imperial College Press.
9. Van Dyke, M., An Album of Fluid Motion, Parabolic Press, Stanford, 1982.
10. Kim, J., P. Moin and R. Moser, Turbulence Statistics in Fully Developed Channel Flow at Low Reynolds Number, Journal of Fluid Mechanics, 177:133-166, 1987.
11. Brunst, H., H.-Ch. Hoppe, W. E. Nagel and M. Winkler, Performance Optimization for Large Scale Computing: The Scalable Vampir Approach, Proc. ICCS 2001, San Francisco, USA, May 28-30, 2001, Lecture Notes in Computer Science, Vol. 2074, Springer, pp. 1075ff.