Parallel Simulations of Self-propelled Microorganisms


1 Parallel Simulations of Self-propelled Microorganisms. K. Pickl a,b, M. Hofmann c, T. Preclik a, H. Köstler a, A.-S. Smith b,d, U. Rüde a,b. ParCo 2013, Munich. a Lehrstuhl für Informatik 10 (Systemsimulation), FAU Erlangen-Nürnberg; b Cluster of Excellence: Engineering of Advanced Materials, FAU Erlangen-Nürnberg; c Fakultät für Mathematik, Lehrstuhl für Numerische Mathematik, TU München; d Institut für Theoretische Physik I, FAU Erlangen-Nürnberg.

2 Flow Regimes. (Slide of example flows spanning a range of Reynolds numbers Re; image credits are given on the original slide.)

3 Flow at Low Reynolds Number: Purcell's Scallop Theorem. (Figure: bead positions x_1, x_2 versus time t under Stokes flow.) Viscous forces dominate and momentum is negligible: the flow is always laminar, time reversible, and there is no coasting. Consequently, asymmetric, non-time-reversible motion is needed to achieve any net movement. E. M. Purcell. Life at low Reynolds number. American Journal of Physics 45:3-11 (1977).
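To put "low Reynolds number" in perspective, a standard order-of-magnitude estimate (illustrative values for a bacterium-sized swimmer in water, not taken from the slides):

\[
  \mathrm{Re} \;=\; \frac{\rho U L}{\mu} \;=\; \frac{U L}{\nu},
  \qquad \nu_{\mathrm{water}} \approx 10^{-6}\,\mathrm{m^2/s},
\]
\[
  L \sim 10^{-6}\,\mathrm{m},\quad U \sim 10^{-5}\,\mathrm{m/s}
  \;\Rightarrow\;
  \mathrm{Re} \;\approx\; \frac{10^{-5}\cdot 10^{-6}}{10^{-6}} \;=\; 10^{-5} \;\ll\; 1,
\]

so inertia is negligible and the flow is well described by the Stokes equations.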

4 Physical Model of a Swimmer. We choose the simplest possible design: Golestanian's* swimmer. Connections between the objects: spring-damper systems, used in previous studies. *A. Najafi and R. Golestanian. Simple swimmer at low Reynolds number: Three linked spheres. Phys. Rev. E, 69(6):062901 (2004). K. Pickl et al. All good things come in threes: three beads learn to swim with lattice Boltzmann and a rigid body solver. JoCS 3(5) (2012).

5 Physical Model of a Swimmer. We choose the simplest possible design: Golestanian's* swimmer. Connections between the objects: spring-damper systems, used in previous studies. Overlapping hydrodynamic interactions make the swimmer bend; to prevent bending (and preserve the 180° axis), angular springs are introduced. *A. Najafi and R. Golestanian. Simple swimmer at low Reynolds number: Three linked spheres. Phys. Rev. E, 69(6):062901 (2004). K. Pickl et al. All good things come in threes: three beads learn to swim with lattice Boltzmann and a rigid body solver. JoCS 3(5) (2012). M. Hofmann. Parallelisation of Swimmer Models for Swarms of Bacteria in the Physics Engine pe. Master's thesis, LSS, FAU Erlangen-Nürnberg (2013).
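To illustrate the connection model, here is a minimal sketch of a bead coupling of this kind: a linear spring-damper acting along the line between two beads, plus an angular spring that drives the angle at the middle bead back to 180°. All names, types, and force-law details are assumptions made for this sketch; this is not the pe interface.

#include <cmath>

struct Vec3 { double x, y, z; };

inline Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
inline double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline double norm(Vec3 a) { return std::sqrt(dot(a, a)); }

// Linear spring-damper force on bead i exerted by the spring connecting it to bead j.
// k: stiffness, d: damping coefficient, L0: rest length (assumed parameters).
Vec3 springDamperForce(Vec3 xi, Vec3 xj, Vec3 vi, Vec3 vj,
                       double k, double d, double L0) {
    Vec3 r = xj - xi;                  // vector from bead i to bead j
    double len = norm(r);
    Vec3 n = (1.0 / len) * r;          // unit vector along the spring
    double stretch = len - L0;         // elongation relative to the rest length
    double relVel  = dot(vj - vi, n);  // relative velocity along the spring axis
    return (k * stretch + d * relVel) * n;
}

// Angular spring: restoring torque magnitude when the angle at the middle bead
// deviates from pi (the straight, 180-degree configuration).
double angularSpringTorque(Vec3 left, Vec3 mid, Vec3 right, double kAngular) {
    Vec3 a = left - mid, b = right - mid;
    double cosTheta = dot(a, b) / (norm(a) * norm(b));
    double theta = std::acos(cosTheta);
    return kAngular * (M_PI - theta);  // zero when the three beads are collinear
}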

6 Non-time Reversible Cycling Strategy. (Plot: x-component of the applied force on bodies 1, 2, and 3 versus time step.) The total applied force vanishes over one cycle (the displacement of the swimmer over one cycle is zero in the absence of fluid). Forces are applied along the specified main axis of the swimmer (in this case the x-direction) on the center of mass of each body, and the net driving force acting on the system at each instant of time is zero.
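As an illustration (not necessarily the exact protocol used in the study), a four-phase force pattern with these properties can be written as follows: in every phase the three forces sum to zero, over a full cycle the force on each individual bead also integrates to zero, and the phase ordering makes the stroke non-reciprocal.

#include <array>

// Phase-wise driving forces (x-component) on the three beads of one swimmer.
// fMag is the force amplitude; the cycle has four phases of equal length.
// Illustrative pattern only, assumed for this sketch.
std::array<double, 3> drivingForces(long timestep, long cycleLength, double fMag) {
    long phase = (timestep % cycleLength) / (cycleLength / 4);
    switch (phase) {
        case 0:  return { +fMag, -fMag, 0.0 };   // contract the left arm
        case 1:  return { 0.0, +fMag, -fMag };   // contract the right arm
        case 2:  return { -fMag, +fMag, 0.0 };   // extend the left arm
        default: return { 0.0, -fMag, +fMag };   // extend the right arm
    }
}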

7 Software. Fluid Simulation: WALBERLA (widely applicable Lattice Boltzmann solver from Erlangen), suited for various flow applications, different fluid models (SRT, TRT, MRT), suitable for homogeneous and heterogeneous architectures, large-scale MPI-based parallelization. Rigid Body Simulation: pe, based on Newton's mechanics, fully resolved objects (sphere, box, ...), connections between objects can be soft or hard constraints, accurate handling of friction during collisions, large-scale MPI-based parallelization. I. Ginzburg et al. Two-relaxation-time lattice Boltzmann scheme: About parametrization, ... Comm. in Computational Physics, 3(2) (2008). P. A. Cundall and O. D. L. Strack. A discrete numerical model for granular assemblies. Geotechnique, 29:47-65 (1979).

8 Coupling both Frameworks: Four-Way Coupling. The coupling proceeds in six stages: 1. Object Mapping, 2. LBM Communication, 3. Stream Collide, 4. Hydrodynamic Forces, 5. Lubrication Correction, 6. Physics Engine.
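A schematic of how one coupled time step could be orchestrated from these six stages; the function names are placeholders invented for this sketch and do not correspond to the actual waLBerla or pe interfaces.

// Hypothetical placeholder stages; in the real code these call into waLBerla and pe.
void mapRigidBodiesToLattice()    { /* 1. flag lattice cells covered by rigid objects   */ }
void communicateLBMGhostLayers()  { /* 2. exchange PDF ghost layers between processes   */ }
void streamAndCollide()           { /* 3. lattice Boltzmann stream-collide sweep        */ }
void computeHydrodynamicForces()  { /* 4. momentum exchange -> forces/torques on bodies */ }
void applyLubricationCorrection() { /* 5. correct under-resolved near-contact forces    */ }
void advanceRigidBodies()         { /* 6. pe: collisions, springs, time integration     */ }

// One coupled time step executes the six stages in order on every process.
void coupledTimeStep() {
    mapRigidBodiesToLattice();
    communicateLBMGhostLayers();
    streamAndCollide();
    computeHydrodynamicForces();
    applyLubricationCorrection();
    advanceRigidBodies();
}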

12 So Far: Sequential Computations. Get Ready for Parallel Simulations of Many Swimmers: introduction of angular springs to prevent bending; handling of pair-wise spring-like interactions, extending not only over neighboring but also over multiple process domains (the job of the pe).

13 Parallel Discrete Element Method (DEM). First MPI communication: send and receive forces and torques.
1: Find and resolve all contacts inside each local domain
2: // First MPI communication
3: for all remote objects b_rem do
4:   sendForcesAndTorquesToOwner(b_rem)
5: end for
6: Receive forces and torques on local objects and perform a time integration
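Sketched with plain MPI, this first communication step could look roughly as follows: each process packs the forces and torques accumulated on its shadow (remote) copies and sends them to the owning process, which adds them to its local bodies before integrating. Message layout, tags, and helper names are assumptions for this sketch, not the pe implementation.

#include <mpi.h>
#include <vector>

// Minimal record for the contribution computed on a shadow copy of a body
// owned by another process (layout assumed for this sketch).
struct ForceTorqueMsg {
    long   bodyId;       // global id of the owning body
    double force[3];     // force accumulated on the shadow copy
    double torque[3];    // torque accumulated on the shadow copy
};

// Send the accumulated contributions for all shadow copies owned by 'owner'.
void sendForcesAndTorquesToOwner(const std::vector<ForceTorqueMsg>& msgs, int owner) {
    MPI_Send(msgs.data(), static_cast<int>(msgs.size() * sizeof(ForceTorqueMsg)),
             MPI_BYTE, owner, /*tag=*/0, MPI_COMM_WORLD);
}

// Receive contributions from one neighbor; the caller adds them onto the
// corresponding local bodies before the time integration.
std::vector<ForceTorqueMsg> receiveForcesAndTorques(int neighbor) {
    MPI_Status status;
    MPI_Probe(neighbor, /*tag=*/0, MPI_COMM_WORLD, &status);
    int bytes = 0;
    MPI_Get_count(&status, MPI_BYTE, &bytes);
    std::vector<ForceTorqueMsg> msgs(bytes / sizeof(ForceTorqueMsg));
    MPI_Recv(msgs.data(), bytes, MPI_BYTE, neighbor, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    return msgs;
}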

14 Parallel Discrete Element Method (DEM). Second MPI communication: update remote objects and migrate objects.
7: for all local objects b_loc do
8:   for all neighboring processes p_loc do
9:     if b_loc and p_loc intersect and there is no remote object of b_loc on p_loc then
10:      send b_loc to p_loc
11:    end if
12:  end for
13:  for all processes p_s holding remote objects of b_loc do
14:    send update of b_loc to p_s
15:  end for
16:  if b_loc's center of mass has moved to neighboring process p_n then
17:    migrate b_loc to p_n
18:    mark springs attached to b_loc to be moved to p_n
19:  end if
20: end for
21: Receive updates and new objects
M. Hofmann. Parallelisation of Swimmer Models for Swarms of Bacteria in the Physics Engine pe. Master's thesis, LSS, FAU Erlangen-Nürnberg (2013).
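The migration test in lines 16-19 depends only on the center of mass: a body stays on its process as long as the center of mass lies inside the local subdomain and is handed over (together with its marked springs) otherwise. A small sketch of that test, assuming an axis-aligned domain decomposition:

struct AABB { double lo[3], hi[3]; };   // axis-aligned subdomain of one process

// True if point p lies inside the subdomain (assumed convention: lower bound
// inclusive, upper bound exclusive, so every point belongs to exactly one process).
bool ownsPoint(const AABB& dom, const double p[3]) {
    for (int d = 0; d < 3; ++d)
        if (p[d] < dom.lo[d] || p[d] >= dom.hi[d]) return false;
    return true;
}

// After time integration: a body migrates if its center of mass left the local
// subdomain; its attached springs are marked so they follow in the third step.
bool mustMigrate(const AABB& localDomain, const double centerOfMass[3]) {
    return !ownsPoint(localDomain, centerOfMass);
}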

16 Parallel Discrete Element Method (DEM). New! Third MPI communication: send springs and attached objects.
22: for all springs s_send marked to be sent do
23:   for all objects b_att attached to s_send do
24:     send remote object of b_att to the stored process p_n
25:   end for
26:   send spring s_send to the stored process p_n
27: end for
28: Receive remote objects and instantiate a distant process, if necessary
29: Receive springs and attach them
30: Delete remote objects, springs, and distant processes no longer needed
This keeps the communication partners updated, all information regarding spring-like pair-wise interactions is sent, and for long-range interactions only the associated processes communicate.
M. Hofmann. Parallelisation of Swimmer Models for Swarms of Bacteria in the Physics Engine pe. Master's thesis, LSS, FAU Erlangen-Nürnberg (2013).
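A condensed sketch of lines 22-27: for every marked spring, shadow copies of the attached bodies are shipped to the stored destination process first, then the spring itself, so that the receiver can always re-attach it. The record layout and helpers are invented for this sketch.

#include <vector>

struct SpringRec {
    long springId;
    long bodyA, bodyB;   // global ids of the attached bodies
    int  destination;    // rank of the process the spring has to move to
    // stiffness, damping, rest length, ... omitted
};

// Hypothetical send helpers; in practice these serialize into MPI messages.
void sendShadowCopy(long bodyId, int destination) { /* pack body state, MPI_Send (omitted) */ }
void sendSpring(const SpringRec& s, int destination) { /* pack spring parameters, MPI_Send (omitted) */ }

// Third communication step: ship marked springs together with their attached bodies.
void sendMarkedSprings(const std::vector<SpringRec>& marked) {
    for (const SpringRec& s : marked) {
        sendShadowCopy(s.bodyA, s.destination);   // attached bodies first,
        sendShadowCopy(s.bodyB, s.destination);   // so the spring can be re-attached
        sendSpring(s, s.destination);
    }
}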

17 So Far: Sequential Computations. Now We Are Ready for Parallel Simulations of Many Swimmers: introduction of angular springs to prevent bending; handling of pair-wise spring-like interactions, extending not only over neighboring but also over multiple process domains.

18 Weak Scaling Setup. Does the newly introduced communication influence the scaling behavior? 100³ lattice cells and 2x8x8 swimmers per core; r_sphere = 4 lattice cells, d_c.o.m. = 16 lattice cells; 4,000 time steps; smallest scenario: 4x4x4 cores (8,192 swimmers); successively doubling the cores in the Cartesian directions; entire domain periodic in all directions.
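For orientation, the per-core figures translate into the following totals (simple arithmetic from the numbers on this and the following slides):

\[
  2\times 8\times 8 = 128~\text{swimmers/core},\qquad
  4\times 4\times 4 = 64~\text{cores}
  \;\Rightarrow\; 64\cdot 128 = 8{,}192~\text{swimmers in the smallest run},
\]
\[
  8{,}192~\text{nodes}\times 16~\text{cores/node} = 131{,}072~\text{cores}
  \;\Rightarrow\; 131{,}072\cdot 128 = 16{,}777{,}216~\text{swimmers},\qquad
  131{,}072\cdot 100^{3} \approx 1.3\times 10^{11}~\text{lattice cells}.
\]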

19 System Configurations of the Used Supercomputers (SUPERMUC / JUQUEEN).
# Cores: 147,456 / 458,752
# Nodes: 9,216 / 28,672
# Processors per Node: 2 / 1
# Cores per Processor: 8 / 16
Peak Performance [PFlop/s]: 3.2 / 5.9
Memory per Core [GByte]: 2 / 1
Processor Type: Sandy Bridge-EP Intel Xeon E5, 8C / IBM PowerPC A2
Clock Speed: 2.7 GHz / 1.6 GHz
Interconnect: Infiniband FDR10 / IBM-specific
Interconnect Topology: intra-island non-blocking tree, inter-island pruned tree 4:1 / 5D torus
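The total core counts in the table follow directly from the node and per-node figures:

\[
  \text{SUPERMUC: } 9{,}216~\text{nodes}\times 2~\text{processors}\times 8~\text{cores} = 147{,}456~\text{cores},
\]
\[
  \text{JUQUEEN: } 28{,}672~\text{nodes}\times 1~\text{processor}\times 16~\text{cores} = 458{,}752~\text{cores}.
\]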

20 Weak Scaling Results on SUPERMUC. (Plot: time to solution [s] and parallel efficiency [%] versus number of nodes, broken down into Physics Engine, LBM Communication, Stream Collide, Hydrodynamic Forces, and Object Mapping.) Using the Intel C++ compiler version 12.1, the IBM MPI implementation, and a clock speed of 2.7 GHz. Not displayed: Setup, Swimmer Setup, and Lubrication Correction. Individual fractions measured using the average over all cores.

21 Weak Scaling Setup on JUQUEEN. The cores are able to perform four-way multithreading; we analyze our smallest setup of 4x4x4 cores (= 4 nodes). (Table: MLUP/s for different numbers of threads.)
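MLUP/s (million lattice cell updates per second) is the usual throughput metric for lattice Boltzmann codes; it is obtained from a run as

\[
  \text{MLUP/s} \;=\; \frac{N_{\text{cells}}\cdot N_{\text{time steps}}}{10^{6}\cdot t_{\text{wall}}},
\]

where for this setup \(N_{\text{cells}} = 64\cdot 100^{3} = 6.4\times 10^{7}\).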

23 Weak Scaling Results on JUQUEEN. (Plot: time to solution [s] and parallel efficiency [%] versus number of nodes, broken down into Physics Engine, LBM Communication, Stream Collide, Hydrodynamic Forces, and Object Mapping.) Largest simulated setup on 8,192 nodes: 16,777,216 swimmers. Using the GNU C++ compiler. Not displayed: Setup, Swimmer Setup, and Lubrication Correction. Individual fractions measured using the average over all cores.

24 Conclusions and Future Work. Conclusions: successful integration of the handling of pair-wise interactions extending over process domains; weak scaling on two supercomputers currently ranked in the top ten of the TOP500 list; demonstrated scalability on up to 262,144 processes. Future Work: systematically analyze the collective behavior of swimmers (reaching a steady state requires longer runs with more time steps); improvement of parallel I/O and the associated data analysis; strong scaling characteristics.

25 Thank you for your attention! Extract from the References:
K. Pickl et al. All good things come in threes: three beads learn to swim with lattice Boltzmann and a rigid body solver. Journal of Computational Science, 3(5), 2012.
M. Hofmann. Parallelisation of Swimmer Models for Swarms of Bacteria in the Physics Engine pe. Master's thesis, Lehrstuhl für Informatik 10 (Systemsimulation), FAU Erlangen-Nürnberg, 2013.
C. Feichtinger et al. WaLBerla: HPC software design for computational engineering simulations. Journal of Computational Science, 2(2).
A. Najafi and R. Golestanian. Simple swimmer at low Reynolds number: Three linked spheres. Phys. Rev. E, 69(6):062901, 2004.
C. M. Pooley et al. Hydrodynamic interaction between two swimmers at low Reynolds number. Phys. Rev. Lett., 99:228103.
Acknowledgments.
