The Memory Intensive System
1 The Memory Intensive System
The DiRAC-2.5x Memory Intensive system at Durham, in partnership with Dell.
Dr Lydia Heck, Technical Director, ICC HPC, and DiRAC Technical Manager
2 DiRAC
Who we are: a U.K. national HPC facility distributed over four academic institutions, managed by scientists for their science areas.
DiRAC services support a significant portion of STFC's science programme, providing simulation and data-modelling resources for the UK frontier-science theory community in particle physics, astroparticle physics, astrophysics, cosmology, solar-system and planetary science (see the DiRAC web pages). DiRAC is a partner in the national e-infrastructure under the umbrella of IRIS.
We have a full management structure:
- Management team: director, technical director, innovation director, project scientist, technical manager
- Programme Management Board (PMB): management team + scientists
- Oversight Committee (OSC): independent, external
Resource allocation is fully refereed through the DiRAC Resource Allocation Committee (RAC), managed by STFC and chaired by scientists from both astrophysics and particle physics.
3 DiRAC - History
2009 DiRAC-1: 13 installations (£13M). Part of DiRAC-1, COSMA4 in Durham was installed in 2010 with 2,976 Intel Westmere cores at 2.67 GHz and 14.8 Tbyte of RAM.
DiRAC-2 (£15M): 4 sites and 5 installations, funded by the U.K. government Department for Business, Innovation & Skills (BIS); OPEX funded by the U.K. Science and Technology Facilities Council (STFC). The procurements (OJEU) were done in less than 100 days. The DiRAC-2 installations had a total peak performance of 2 PFlops. From north to south:
- Edinburgh: BlueGene (UK-QCD) - QCD simulations; some astrophysical calculations were ported (IBM); 20th in the Top 500, June 2012
- Durham (ICC): COSMA - Intel SandyBridge cluster - cosmological simulations; solar-system and planetary modelling; gravitational waves; beam-optimisation simulations (IBM); 134th in the Top 500, June 2012
- Cambridge: Cosmos - shared-memory SGI system - cosmological data modelling (SGI)
- Cambridge: Darwin - Intel SandyBridge cluster - QCD, cosmological and astrophysical simulations; solar and planetary physics (Dell); 93rd in the Top 500, June 2012
- Leicester: Complexity - Intel SandyBridge cluster - astrophysical simulations; solar and planetary physics; cosmological and astrophysical simulations; star formation (HP)
4 DiRAC - History
2016 DiRAC-2.5:
- Edinburgh: transfer of the BlueGene-Q system from the Hartree Centre at Daresbury as spare parts (£30k)
- Durham (ICC): COSMA6 - repurposing of the Blue Wonder cluster (114th in the Top 500, June 2012), a gift from the Hartree Centre, rebuilt in partnership with OCF and DDN and a lot of willing hands from the ICC, adding 8,000 cores to the existing system (£400k)
- Cambridge: new installation (Dell) - 13% of CSD3 (768 nodes; 24,576 cores; Intel OmniPath), CSD3-GPU (360 nodes; 1,440 GPUs; Mellanox EDR) and CSD3-KNL (342 nodes; 342 KNLs; Intel OmniPath)
- Leicester: Complexity - repurposing a local HPC cluster to add 3,000 Intel SandyBridge cores
5 DiRAC-2.5x
Autumn 2017. Total spend: £9M across 3 competitive tenders.
- Edinburgh: replace the ailing BlueGene-Q - Extreme Scaling
- Durham: add a system with 100 Tbyte of RAM and a fast checkpointing I/O system - Memory Intensive
- Leicester: replace the DiRAC-2.5 add-on with modern, competitive cores and add substantial shared-memory systems - Data Intensive
- Leicester: investigations into cloud computing - how could public cloud benefit DiRAC, and how could DiRAC offer cloud? Also investigations into optimal ways of transferring data.
6 Edinburgh - Extreme Scaling (ES)
Delivered and installed by HPE (680 Tflops). The Extreme Scaling service is hosted by the University of Edinburgh. DiRAC Extreme Scaling (also known as Tesseract) is available to industry, commerce and academic researchers.
- Intel Xeon Skylake 4116 processors: 844 nodes, 12 cores per socket, two sockets per node, FMA AVX512, 2.2 GHz base, 3.0 GHz turbo, 96 GB RAM per node
- Hypercube Intel OmniPath interconnect
- 2.4 PB Lustre DDN storage
This system is configured for good-to-excellent strong scaling and vectorised codes, and has high-performance I/O and interconnect.
(Image: Edinburgh Castle - Historic Environment Scotland, the public body for Scotland's historic environment)
7 Durham - Memory Intensive (MI)
- COSMA5 (2012, IBM/Lenovo/DDN): 6,400 Intel SandyBridge cores, 2.6 GHz, 51 Tbyte of RAM; Mellanox FDR10 interconnect in 2:1 blocking; 2.5 Pbyte of GPFS data storage
- COSMA6 (2016, IBM/Lenovo/DDN): 8,192 Intel SandyBridge cores, 2.6 GHz, 65 Tbyte of RAM; Mellanox FDR10 interconnect in 2:1 blocking; 2.5 Pbyte of Lustre data storage
- COSMA7 (2018, Dell): 4,116 Intel Xeon Skylake 5120 cores; Mellanox EDR in a 2:1 blocking configuration with islands of 24 nodes; a total of 110 Tbyte of RAM; a fast checkpointing I/O system (343 Tbyte) with a peak performance of 185 Gbyte/second write and read; 1.8 Pbyte of data storage
8 Cambridge - Data Intensive (DiC)
DiRAC has a 13% share of the CSD3 petascale HPC platform (Peta4 & Wilkes2), hosted at Cambridge University (Dell).
Peta4 provides 1.5 petaflops of compute capability:
- 342-node C6320p Intel KNL cluster (Intel Xeon Phi CPUs, 96 GB of RAM per node)
- 768 Skylake nodes, each with 2 x Intel Xeon Skylake 6142 processors, 2.6 GHz, 16-core (32 cores per node): 384 nodes with 192 GB memory, 384 nodes with 384 GB memory
- HPC interconnect: Intel OmniPath in 2:1 blocking
- Storage: 750 TB of disk offering a Lustre parallel filesystem, and 750 GB of tape
Wilkes2 provides 1.19 petaflops of compute capability:
- 360-GPU NVIDIA cluster: four NVIDIA Tesla P100 GPUs in each of 90 Dell EMC server nodes, each node with 96 GB memory, connected by Mellanox EDR InfiniBand
9 Leicester - Data Intensive
Delivered and installed by HPE, 2018.
Data Intensive 2.5x (DiL): the DI system has two login nodes, a Mellanox EDR interconnect in a 2:1 blocking setup, and 3 PB of Lustre storage.
- Main cluster: 136 dual-socket nodes with Intel Xeon Skylake 6140 (two FMA AVX512 units, 2.3 GHz, 36 cores and 192 GB RAM per node)
- Large memory: 1 x 6 TB server with 144 cores (Xeon 6154 @ 3.0 GHz base); 3 x 1.5 TB servers with 36 cores (Xeon 6140 @ 2.3 GHz base)
The DI system at Leicester is designed to offer fast, responsive I/O.
Data Intensive 2 (formerly Complexity): 272 Intel Xeon SandyBridge nodes with 128 GB RAM per node, 4,352 cores (95 Tflop/s), connected via a non-blocking Mellanox FDR interconnect. This cluster features an innovative switching architecture designed, built and delivered by Leicester University and Hewlett Packard. The total storage available to both systems is in excess of 1 PB.
10 Durham MI
COSMA7 (Dell, 2018):
- 4,116 Intel Xeon Skylake 5120 cores; Mellanox EDR in a 2:1 blocking configuration with islands of 24 nodes
- 2 x 1.5 Tbyte login nodes; 1 x 3 Tbyte 4-socket node for Durham Astrophysics
- a total of 110 Tbyte of RAM
- a fast checkpointing I/O system (343 Tbyte) with a peak performance of 185 Gbyte/second write and read
- 1.8 Pbyte of data storage
11 Durham MI
The system was delivered by Dell in March 2018 and installed by Alces. The DiRAC service started on 1 May 2018.
Industrial engagement: aligns closely with the Industrial Strategy of the Department for Business, Energy and Industrial Strategy (BEIS). Funding leads to industrial engagement; this results in innovation, benefiting both academia and the wider industry.
12 How is MI different?
A fast checkpointing I/O system (343 Tbyte) with a peak performance of 185 Gbyte/second write and read:
- 15 Lustre Object Storage Servers on Dell 640 nodes, each with 2 x Intel Skylake 5120 processors, 192 Gbyte of RAM, 8 x 3.2 TB NVMe SFF drives and 1 Mellanox EDR card
A user-code benchmark produced 180 Gbyte/second write and read. This is almost wire speed, and this is currently the fastest filesystem in production in Europe.
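A back-of-the-envelope check (not from the slides) of what these numbers buy: dumping the full 110 Tbyte of COSMA7 RAM through the NVMe Lustre layer at the benchmarked user-code bandwidth takes roughly ten minutes.

```python
# Rough sketch: time to checkpoint all of COSMA7's RAM at the
# benchmarked 180 Gbyte/s (decimal units throughout).

RAM_TBYTE = 110      # total COSMA7 memory (slide 10)
BW_GBYTE_S = 180     # measured user-code write bandwidth (this slide)

seconds = RAM_TBYTE * 1000 / BW_GBYTE_S
minutes = seconds / 60

print(f"full-memory checkpoint: {seconds:.0f} s (~{minutes:.1f} min)")
# -> full-memory checkpoint: 611 s (~10.2 min)
```

At the 30 Gbyte/s class of a conventional scratch filesystem, the same dump would take over an hour, which is what drives the snapshot-cost comparison on the next slides.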
13 How is DiRAC@Durham MI different?
[Chart: kilowatt-hours over 5 years - power usage for snapshots with different performance solutions, for snapshot periods of 24, 12, 6, 4, 2 and 1 hour, at I/O rates of 30 GB/sec, 120 GB/sec and 140 GB/sec]
14 How is MI different?
Cost of snapshots over 5 years, in CPU hours, by snapshot period and I/O rate:

Snapshot period  24-hour    12-hour     6-hour      4-hour      2-hour      1-hour
30 GB/sec        7,087,597  14,175,194  28,350,388  42,525,582  85,051,164  170,102,328
120 GB/sec       1,771,899  3,543,799   7,087,597   10,631,396  21,262,791  42,525,582
140 GB/sec       1,518,771  3,037,542   6,075,083   9,112,625   18,225,250  36,450,500

Total number of cores: 4,116; years of running: 5. Total number of available CPU hours per year: 36,056,160.
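The table's structure can be sanity-checked: halving the snapshot period doubles the cost, and at a conventional 30 GB/sec, hourly snapshots would eat almost the machine's entire 5-year capacity. A minimal sketch (the 2-hour and 1-hour figures are reconstructed from the doubling pattern):

```python
# Snapshot cost at 30 GB/sec over 5 years, keyed by snapshot period (hours).
cost_30gb = {24: 7_087_597, 12: 14_175_194, 6: 28_350_388,
             4: 42_525_582, 2: 85_051_164, 1: 170_102_328}

# Halving the period doubles the number of snapshots, hence the cost.
assert cost_30gb[12] == 2 * cost_30gb[24]
assert cost_30gb[1] == 2 * cost_30gb[2]

available_per_year = 36_056_160       # 4,116 cores x 8,760 hours
five_year_capacity = 5 * available_per_year

fraction = cost_30gb[1] / five_year_capacity
print(f"hourly snapshots at 30 GB/sec: {fraction:.0%} of total capacity")
# -> hourly snapshots at 30 GB/sec: 94% of total capacity
```

At the MI system's 140 GB/sec class of bandwidth, the same hourly cadence costs about a fifth of that, which is the argument for the fast checkpointing layer.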
15 Durham MI - industrial engagement
Dell partners:
- Alces: integrator, and an offer for a CDT placement
- Intel: collaboration on proofs of concept; extension of the IPCC at the Durham ICC
- Mellanox: student placement to optimize SWIFT for the Mellanox interconnect; optimized OpenMPI for the Mellanox infrastructure; membership of the Centre of Excellence
- Nvidia: a 2-day Nvidia hackathon in September
More involvement will follow as the partnerships build.
16 Science on MI
The system has been designed to allow for effective large-scale cosmological structure calculations. EAGLE likes fewer cores and more RAM per node. SWIFT should not really mind; however, the detailed simulations require lots of RAM. In both cases there are long run times - the Universe is close to 14 billion years old!
Science aims: a run about 30 times bigger than the EAGLE run.
The EAGLE run parameters:
- 1500^3 = 3.375 x 10^9 dark matter particles
- 1500^3 = 3.375 x 10^9 baryonic (visible) matter particles
- Volume: (100 Megaparsec)^3 (1 parsec = 3.26 light years)
- 10,000 particles per Milky Way galaxy
- 20 TB of RAM
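The particle counts above imply a memory budget per particle, which is what makes RAM the binding constraint for a larger run. A rough sketch (the bytes-per-particle figure is implied by the slide's numbers, not stated on it):

```python
# Scale of the EAGLE run, from the parameters on this slide.

n_dm = 1500 ** 3          # dark matter particles
n_baryon = 1500 ** 3      # baryonic particles
total = n_dm + n_baryon

ram_tb = 20               # EAGLE's stated memory footprint
bytes_per_particle = ram_tb * 1e12 / total

print(f"particles: {total:.3e}")              # -> particles: 6.750e+09
print(f"~{bytes_per_particle:.0f} bytes per particle")   # -> ~2963
```

A run "about 30 times bigger" at the same per-particle cost is what motivates a system with over 100 Tbyte of RAM.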
17 Science on MI
Visible-matter astrophysics: galaxy formation, supernovae, star formation.
At 10^6 solar masses per mass particle and 10,000 mass particles per galaxy, the effective resolution is 1,000 parsec (of the order of 3,000 light years).
Supernova events create a lot of energy. Using previous methods, this energy would be distributed to heat every particle. In EAGLE, this energy heats a tiny fraction - about a hundred neighbours - to 10^7 K, so the cooling rate stays low. To do this better, the resolution has to improve by at least a factor of 10, to a resolution of 100 light years; it would be even better to go down to 1 light year.
The stars in the Milky Way have a total of about 5 x 10^10 solar masses. At a resolution of 10^6 solar masses per mass particle, this means that all stars are modelled by only about 50,000 mass particles. (There are 250 +/- 150 x 10^9 stars in the galaxy.)
Gas in the present galaxy simulation has very limited features. The real galaxy has a complex interstellar medium with dense cold gas clouds, warmer diffuse hot gas, and ionized gas. In finer-grained calculations, these could be modelled much more realistically.
The first galaxies were of the order of 100 light years across; with the current resolution, these cannot be modelled. Our Milky Way is about 50,000 light years across.
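The particle-count claim above is straightforward arithmetic from the slide's own numbers:

```python
# How coarse is a 10^6 solar-mass particle for the Milky Way's stars?

stellar_mass = 5e10     # total stellar mass of the Milky Way, in solar masses
particle_mass = 1e6     # solar masses per simulation mass particle

print(int(stellar_mass / particle_mass))   # -> 50000 particles for all stars
```

Fifty thousand particles for hundreds of billions of stars is why the slide argues for at least an order of magnitude more resolution.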
18 Science on MI
The volume of the visible universe is of the order of 10^31 cubic light years; the linear scale is of the order of 3 x 10^10 light years.
A run about 30 times bigger than the EAGLE run would investigate:
- verifying Einstein's laws of gravity
- scaling out in volume while keeping the same resolution
The evolution of the structure of the universe can be calculated without a significant influence from baryonic physics (less than 1%). This is only possible on larger scales, where the gravity of dark matter dominates the baryonic physics.
The EAGLE run modelled a volume of about 10^25 cubic light years, which is 1/1,000,000 of the real universe. The largest objects have not yet been found/modelled.
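The volume comparison can be checked to order of magnitude from the earlier EAGLE box size (assuming 1 parsec ≈ 3.26 light years; the slide's 10^25 and 1/1,000,000 figures are themselves order-of-magnitude values):

```python
# Order-of-magnitude check: EAGLE box volume vs the visible universe.

mpc_in_ly = 1e6 * 3.26                    # 1 Megaparsec in light years
eagle_volume = (100 * mpc_in_ly) ** 3     # (100 Mpc)^3 in cubic light years
universe_volume = 1e31                    # visible universe (this slide)

print(f"EAGLE volume: {eagle_volume:.1e} ly^3")             # ~3.5e+25
print(f"fraction of universe: {eagle_volume / universe_volume:.1e}")  # ~3.5e-06
```

Both results land within a small factor of the slide's round numbers, consistent with "about 10^25 cubic light years" and "roughly one millionth" of the visible universe.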
More informationScaling the Software and Advancing the Science of Global Modeling and Assimilation Systems at NASA. Bill Putman
Global Modeling and Assimilation Office Scaling the Software and Advancing the Science of Global Modeling and Assimilation Systems at NASA Bill Putman Max Suarez, Lawrence Takacs, Atanas Trayanov and Hamid
More informationNear-real-time satellite data processing at NIWA with Cylc
Near-real-time satellite data processing at NIWA with Cylc CSPP/IMAPP Users Group Meeting Simon Wood, NIWA SSEC, Madison, Wisconsin June 27-29 2017 Outline About NIWA who we are and what we do NIWA's Direct
More informationInvestigating Solar Power in Different Weather Conditions.
Investigating Solar Power in Different Weather Conditions. Pam Dugdale, Cronton 6 th Form College. Introduction Advertisements for photovoltaic (PV) solar panels are everywhere, with those selling these
More informationMIT Exploring Black Holes
THE UNIVERSE and Three Examples Alan Guth, MIT MIT 8.224 Exploring Black Holes EINSTEIN'S CONTRIBUTIONS March, 1916: The Foundation of the General Theory of Relativity Feb, 1917: Cosmological Considerations
More informationToward models of light relativistic jets interacting with an inhomogeneous ISM
Toward models of light relativistic jets interacting with an inhomogeneous ISM Alexander Wagner Geoffrey Bicknell Ralph Sutherland (Research School of Astronomy and Astrophysics) 1 Outline Introduction
More informationThe Milky Way Galaxy
The Milky Way Galaxy A. Expert - I have done a lot of reading in this area already. B. Above Average - I have learned some information about this topic. C. Moderate - I know a little about this topic.
More informationComputational Physics Computerphysik
Computational Physics Computerphysik Rainer Spurzem, Astronomisches Rechen-Institut Zentrum für Astronomie, Universität Heidelberg Ralf Klessen, Institut f. Theoretische Astrophysik, Zentrum für Astronomie,
More informationDirected Reading A. Section: The Life Cycle of Stars TYPES OF STARS THE LIFE CYCLE OF SUNLIKE STARS A TOOL FOR STUDYING STARS.
Skills Worksheet Directed Reading A Section: The Life Cycle of Stars TYPES OF STARS (pp. 444 449) 1. Besides by mass, size, brightness, color, temperature, and composition, how are stars classified? a.
More informationGraspIT Questions AQA GCSE Physics Space physics
A. Solar system: stability of orbital motions; satellites (physics only) 1. Put these astronomical objects in order of size from largest to smallest. (3) Fill in the boxes in the correct order. the Moon
More informationEarth in Space. Stars, Galaxies, and the Universe
Earth in Space Stars, Galaxies, and the Universe Key Concepts What are stars? How does the Sun compare to other stars? Where is Earth located in the universe? How is the universe structured? What do you
More informationStars, Galaxies & the Universe Lecture Outline
Stars, Galaxies & the Universe Lecture Outline A galaxy is a collection of 100 billion stars! Our Milky Way Galaxy (1)Components - HII regions, Dust Nebulae, Atomic Gas (2) Shape & Size (3) Rotation of
More informationParallel Eigensolver Performance on High Performance Computers 1
Parallel Eigensolver Performance on High Performance Computers 1 Andrew Sunderland STFC Daresbury Laboratory, Warrington, UK Abstract Eigenvalue and eigenvector computations arise in a wide range of scientific
More informationDirect Self-Consistent Field Computations on GPU Clusters
Direct Self-Consistent Field Computations on GPU Clusters Guochun Shi, Volodymyr Kindratenko National Center for Supercomputing Applications University of Illinois at UrbanaChampaign Ivan Ufimtsev, Todd
More informationListening for thunder beyond the clouds
Listening for thunder beyond the clouds Using the grid to analyse gravitational wave data Ra Inta The Australian National University Overview 1. Gravitational wave (GW) observatories 2. Analysis of continuous
More informationLESSON 1. Solar System
Astronomy Notes LESSON 1 Solar System 11.1 Structure of the Solar System axis of rotation period of rotation period of revolution ellipse astronomical unit What is the solar system? 11.1 Structure of the
More informationSTARS AND GALAXIES STARS
STARS AND GALAXIES STARS enormous spheres of plasma formed from strong gravitational forces PLASMA the most energetic state of matter; responsible for the characteristic glow emitted by these heavenly
More informationAdvancing Weather Prediction at NOAA. 18 November 2015 Tom Henderson NOAA / ESRL / GSD
Advancing Weather Prediction at NOAA 18 November 2015 Tom Henderson NOAA / ESRL / GSD The U. S. Needs Better Global Numerical Weather Prediction Hurricane Sandy October 28, 2012 A European forecast that
More informationThe Square Kilometre Array
The Square Kilometre Array An example of international cooperation a.chrysostomou@skatelescope.org Antonio Chrysostomou Head of Science Operations Planning SKA Key Science Drivers: The history of the Universe
More informationBig Bang, Big Iron: CMB Data Analysis at the Petascale and Beyond
Big Bang, Big Iron: CMB Data Analysis at the Petascale and Beyond Julian Borrill Computational Cosmology Center, LBL & Space Sciences Laboratory, UCB with Christopher Cantalupo, Theodore Kisner, Radek
More informationPerm State University Research-Education Center Parallel and Distributed Computing
Perm State University Research-Education Center Parallel and Distributed Computing A 25-minute Talk (S4493) at the GPU Technology Conference (GTC) 2014 MARCH 24-27, 2014 SAN JOSE, CA GPU-accelerated modeling
More informationMAJOR SCIENTIFIC INSTRUMENTATION
MAJOR SCIENTIFIC INSTRUMENTATION APPLICATION OF OPERATING RESOURCES FEDERAL APPROPRIATIONS GENERAL TRUST DONOR/SPONSOR DESIGNATED GOV T GRANTS & CONTRACTS FY 2006 ACTUAL FY 2007 ESTIMATE FY 2008 ESTIMATE
More informationHeidi B. Hammel. AURA Executive Vice President. Presented to the NRC OIR System Committee 13 October 2014
Heidi B. Hammel AURA Executive Vice President Presented to the NRC OIR System Committee 13 October 2014 AURA basics Non-profit started in 1957 as a consortium of universities established to manage public
More informationWhat is the solar system?
Notes Astronomy What is the solar system? 11.1 Structure of the Solar System Our solar system includes planets and dwarf planets, their moons, a star called the Sun, asteroids and comets. Planets, dwarf
More information