Investigation of an Unusual Phase Transition: Freezing on Heating of Liquid Solutions


Investigation of an Unusual Phase Transition: Freezing on Heating of Liquid Solutions. Calin Gabriel Floare, National Institute for R&D of Isotopic and Molecular Technologies, Cluj-Napoca, Romania. Max von Laue (1879-1960), Paul Langevin (1879-1946), Joseph Fourier (1768-1830)

Cluj-Napoca, Romania. Cluj in 1617: bird's-eye view by Georg Houfnagel after a painting by Egidius van der Rye. The city center: aerial view from the south-west (1930)

Our cluster: a Hewlett Packard BladeSystem c7000 with 16 ProLiant BL280c G6 blades (2 quad-core Intel Xeon X5570 @ 2.93 GHz, 16 GB RAM, 500 GB HDD each), running TORQUE, MAUI, GANGLIA (http://hpc.itim-cj.ro) and NAGIOS, configured from scratch on Scientific Linux 5.3 (Boron). We installed different Intel compilers and mathematical and MPI libraries. We are using different molecular dynamics and quantum chemistry codes such as AMBER, GROMACS, NAMD, LAMMPS, CPMD, CP2K, Gaussian, GAMESS, MOLPRO, DFTB+, Siesta, VASP and Accelrys Materials Studio. We are also hosting the RO-14-ITIM Grid site (http://grid.itim-cj.ro)
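For readers unfamiliar with TORQUE/MAUI batch systems, the sketch below shows how a typical MD job would be prepared and submitted with qsub on such a cluster. The queue name, resource request and the GROMACS command line are illustrative assumptions, not the actual configuration of the cluster described above.

```python
# Minimal sketch: generate and submit a TORQUE/PBS job for an MD run.
# Queue name, resource request and the mdrun command line are illustrative
# assumptions, not the actual configuration of the hpc.itim-cj.ro cluster.
import subprocess

pbs_script = """#!/bin/bash
#PBS -N gromacs_test
#PBS -q batch
#PBS -l nodes=1:ppn=8
#PBS -l walltime=24:00:00
cd $PBS_O_WORKDIR
mpirun -np 8 mdrun_mpi -deffnm production
"""

with open("job.pbs", "w") as f:
    f.write(pbs_script)

# qsub prints the job identifier on standard output
job_id = subprocess.run(["qsub", "job.pbs"], capture_output=True, text=True).stdout.strip()
print("Submitted job:", job_id)
```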

The story of a serendipitous discovery¹: α-cyclodextrin (αCD), the association of 6 glucose units, (C₆O₅H₁₀)₆; 4-methylpyridine (4MP), C₆NH₇; ...and a bit of water. ¹ M. Plazanet, C. Floare, M. R. Johnson, R. Schweins, H. P. Trommsdorff, "Freezing on heating of liquid solutions", J. Chem. Phys. 121(11), 5031 (2004); ILL Annual Report 2004, 54-55; and the papers which followed.

[Phase diagram of the αCD/4MP solution: temperature (°C, 40-80) versus concentration (αCD [g] / 4MP [l], 100-300); the solid phase lies above the liquid phase, i.e. at higher temperature. 200 g/l corresponds to ~1 αCD per 50 4MP molecules.]

A movie by A. Filhol, Institut Laue-Langevin. Azobenzene: melts at 66 °C. CD-4MP: freezes at 66 °C. http://www.ill.eu/about/movies/experiments/in16-a-liquid-paradox/

[Solubility of αCD in 4MP: concentration (mg/ml, 0-300) versus temperature (°C, 40-100).]

How can we rationalize these surprising observations? As temperature increases, entropy must increase; how is this compatible with the observation that crystalline order is established and that molecular motions are slowed down? We characterize the changes in the structure and in the molecular dynamics by: elastic and inelastic neutron scattering, neutron and X-ray diffraction, low-field NMR, and molecular dynamics simulations.
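As a worked illustration of why "freezing on heating" does not violate the second law (a generic sketch, not the quantitative analysis of the cited papers): only the total entropy of solute plus solvent has to increase, so the solute can order if the solvent gains enough entropy in the process.

```latex
% Illustrative sketch: crystallization on heating is thermodynamically allowed
% when the total entropy change of the process is positive.
\[
  \Delta G(T) = \Delta H - T\,\Delta S_{\mathrm{tot}},
  \qquad
  \Delta S_{\mathrm{tot}} = \Delta S_{\mathrm{solute}} + \Delta S_{\mathrm{solvent}} .
\]
% \Delta S_{solute} < 0 (the solute orders), but if the solvent gains enough
% entropy so that \Delta S_{tot} > 0 while \Delta H > 0, then \Delta G > 0 at
% low T (liquid stable) and \Delta G < 0 for T > T* = \Delta H / \Delta S_{tot}:
% the ordered phase becomes stable on heating.
```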

NEUTRON SCATTERING AT THE INSTITUT LAUE-LANGEVIN (ILL); X-RAY SCATTERING AT THE ESRF

a) Hysteresis-like fixed window (elastic) scan, IN10, ILL; b) Quasi-elastic neutron spectra, IN5, ILL
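As background on the technique (a standard relation, not necessarily the exact model applied to the IN10 data): in the Gaussian approximation the elastic intensity of a fixed-window scan decays with the momentum transfer Q through an effective mean-square displacement, so a step in the elastic intensity at the transition signals an abrupt change in the amplitude of molecular motions.

```latex
% Gaussian approximation for the elastic intensity in a fixed-window scan:
\[
  I_{\mathrm{el}}(Q, T) \;\approx\; I_{0}\,
  \exp\!\left(-\tfrac{1}{3}\,\langle u^{2}(T)\rangle\, Q^{2}\right)
\]
% A hysteresis-like step in I_el(T) therefore reflects a sudden change in the
% mean-square displacement <u^2(T)> at the transition.
```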

F. Ding and N. Dokholyan, Trends in Biotechnology 23(9) 450 (2005)

Model system studied in 2004: NPT molecular dynamics simulations using Accelrys Cerius² v4.6 with the COMPASS force field, running on different SGI workstations. A periodic box with dimensions 24 Å × 24 Å × 24 Å, containing one αCD molecule and 50 molecules of 4MP: 826 atoms.
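A quick consistency check of the quoted atom count from the molecular formulas given earlier (αCD = (C₆O₅H₁₀)₆, 4MP = C₆NH₇); this small script is illustrative and not part of the original presentation.

```python
# Consistency check of the quoted atom count (illustrative only).
atoms_aCD = 6 * (6 + 5 + 10)   # alpha-cyclodextrin, (C6O5H10)6 -> 126 atoms
atoms_4MP = 6 + 1 + 7          # 4-methylpyridine, C6NH7 -> 14 atoms

total = 1 * atoms_aCD + 50 * atoms_4MP
print(total)   # 826 -> matches the 826 atoms quoted for the Cerius2 model
```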

Larger model: 20 αCD molecules, 1120 molecules of 4MP and 240 water molecules; NPT-ensemble MD using AMBER9 in a (60 Å)³ box; 18920 atoms. Speeds of 0.22 ns/day (1 core), 0.39 ns/day (2 cores) and 0.69 ns/day (4 cores); InfiniBand is needed for further scale-up. An AMBER benchmark on an IBM SP5 cluster (IBM p575 POWER5, bassi.nersc.gov, 118 8-CPU nodes, 1.9 GHz POWER5+ CPUs, 2 MB L2 cache, 36 MB L3 cache, 32 GB memory per node) produced 22 ns/day when using 256 cores, on a system containing around 23500 atoms. Initially we have to optimize the force fields using the force-matching method. Trajectories 100 ns long at different temperatures must be calculated for good statistics, followed by hydrogen-bond dynamics, cluster formation analysis and correlation coefficients. This system will be studied at CINECA, Italy, on 256 CPUs, in a project funded by the HPC-Europa2 programme.
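The quoted AMBER9 throughputs imply the following speedups, parallel efficiencies and 100 ns trajectory times; another small illustrative calculation, not taken from the slides (it also re-checks the 18920-atom count from the composition above).

```python
# Parallel efficiency and run-time estimate from the quoted AMBER9 throughputs
# (illustrative calculation, not from the slides).
throughput = {1: 0.22, 2: 0.39, 4: 0.69}   # ns/day on 1, 2 and 4 cores

for cores, ns_per_day in throughput.items():
    speedup = ns_per_day / throughput[1]
    print(f"{cores} core(s): speedup {speedup:.2f}, efficiency {speedup / cores:.0%}")
# 1 core(s): speedup 1.00, efficiency 100%
# 2 core(s): speedup 1.77, efficiency 89%
# 4 core(s): speedup 3.14, efficiency 78%

print(f"~{100 / throughput[4]:.0f} days per 100 ns trajectory on 4 cores")
# ~145 days -> hence the need for a larger machine (CINECA, 256 CPUs)

# Atom count check: 20 aCD (126 atoms each) + 1120 4MP (14) + 240 water (3)
print(20 * 126 + 1120 * 14 + 240 * 3)   # 18920 -> matches the quoted value
```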

The 1-million-atom simulation dream: AMBER 11 GPU performance compared with that on Kraken@ORNL

GPU Codes

Milu (Miramare Interoperable Lite User Interface), a tool to easily set up a UI on (almost) any machine (https://eforge.escience-lab.org/gf/project/milu/); BEMuSE: Bias-Exchange Metadynamics Submission Environment (https://euindia.ictp.it/bemuse/); EPICO: eLab Procedure for Installation and Configuration (http://epico.escience-lab.org/). Training tools: GRID Seed (http://gridseed.escience-lab.org) and the Moodle platform (http://www.moodle.org)

Running AMBER on the GRID: Giulio Rastelli and colleagues from the University of Modena had already deployed AMBER on the GRID. Contacts were established with the institution distributing AMBER regarding the license policy on the grid. The outcome of the negotiation was that we were allowed to deploy AMBER on the grid under the following conditions*: each cluster deploying AMBER had to have at least one license; grid users allowed to use AMBER had to come from one of the laboratories owning an AMBER license; grid users allowed to use AMBER under the conditions described above could deploy their computations on all the grid clusters. * Vincent Breton, Doman Kim and Giulio Rastelli, "WISDOM: A Grid-Enabled Drug Discovery Initiative against Malaria", in Grid Computing: Infrastructure, Service and Applications, ed. Lizhe Wang, Wei Jie and Jinjun Chen, p. 373

To know more about it: "Freezing on heating of liquid solutions", M. Plazanet, C. Floare, M.R. Johnson, R. Schweins, H.P. Trommsdorff, J. Chem. Phys. 121 (2004) 5031; J. Chem. Phys. 125 (2005) 154504; Chem. Phys. 317 (2006) 153; Chem. Phys. 331 (2006) 35; J. Phys.: Condens. Matter 19 (2007) 205108; Phys. Chem. Chem. Phys. 12 (2010) 7026

PhysicsWeb, 24/09/2004; Science News, 16/10/2004; Physics World, 11/2004; ILL Bulletin, 11/2004; Sciences et Avenir, 12/2004; Science et Vie, 01/2005; Geo magazine, German edition, 01/2005; http://www.scienceinschool.org/repository/docs/defying.pdf

Thank you for your attention