Physics plans and ILDG usage
1 Physics plans and ILDG usage in Italy. Francesco Di Renzo, University of Parma & INFN Parma.
2 The MAIN ILDG USERS in Italy are the Rome groups. A by-now long track record of ILDG-based projects, mainly within the ETMC Collaboration: a long-standing record (I only picked out a couple of snapshots) in n_f = 2 tmLQCD physics. Current work: clover-improved n_f = 2+1+1 tmLQCD regularization at nearly physical pion mass (a fm). This in turn calls for n_f = 4 RI-MOM renormalization factors (plus some quark-mass parameter tuning work). It will result in some *96 and some thousands of 24³×48 configurations, plus maybe some hundreds of 48³×96 propagators, stored on ILDG.
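Configurations and propagators like these are stored on ILDG in the LIME container format: a stream of records, each with a 144-byte header (32-bit magic number, 16-bit version, 16 bits of flag/reserved bits, 64-bit big-endian payload length, 128-byte null-padded type string), payloads padded to an 8-byte boundary. A minimal reader sketch, assuming that standard layout; the `make_record` helper and the `ildg-format` demo payload are only there to make the example self-contained:

```python
import io
import struct

LIME_MAGIC = 0x456789AB  # magic number identifying a LIME record header

def read_lime_records(stream):
    """Yield (type, data) pairs from a LIME-formatted binary stream."""
    while True:
        header = stream.read(144)
        if len(header) < 144:           # clean end of stream
            return
        # big-endian: u32 magic, u16 version, u16 flag bits, u64 payload length
        magic, version, flags, length = struct.unpack(">IHHQ", header[:16])
        if magic != LIME_MAGIC:
            raise ValueError("not a LIME record")
        rec_type = header[16:144].split(b"\0", 1)[0].decode("ascii")
        data = stream.read(length)
        stream.read((-length) % 8)      # skip padding to the 8-byte boundary
        yield rec_type, data

def make_record(rec_type, data, mb=1, me=1):
    """Build one LIME record in memory (demo helper, not part of a reader)."""
    flags = (mb << 15) | (me << 14)     # message-begin / message-end bits
    header = struct.pack(">IHHQ", LIME_MAGIC, 1, flags, len(data))
    header += rec_type.encode("ascii").ljust(128, b"\0")
    return header + data + b"\0" * ((-len(data)) % 8)

# Demo: round-trip a fake metadata record through the reader
blob = make_record("ildg-format", b"<ildgFormat>...</ildgFormat>")
records = list(read_lime_records(io.BytesIO(blob)))
print(records[0])
```

In a real ILDG file the interesting records carry types such as `ildg-format` (XML metadata) and `ildg-binary-data` (the gauge field itself); the reader above only walks the container, it does not interpret payloads.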
3 This of course does not exhaust LQCD research in Italy, but most other research lies outside ILDG core business (finite temperature, confinement, topology, ...). It is nevertheless useful to point out that some other ILDG-based work is coming up. ILDG newcomers in Italy: some Parma group work on finite-size effects in RI-MOM. This is NSPT work, i.e. Perturbation Theory: finite-size effects (above) prevent data from different volumes (black and red points) from collapsing onto smooth curves. A similar analysis can be performed in the nonperturbative framework! QCDSF ILDG n_f = 2 configurations (different volumes) can be used (and maybe some other material from our STRONGnet collaborators).
4 Computing facilities: good news! [Slide shows the November 2012 TOP500 list (R_max and R_peak in TFlop/s): Titan at Oak Ridge, Sequoia at LLNL, the K computer at RIKEN AICS, Mira at Argonne, JUQUEEN at FZJ, SuperMUC at LRZ, Stampede at TACC, Tianhe-1A in Tianjin, and CINECA's Fermi, a BlueGene/Q, at rank 9.] A BlueGene/Q system was installed at CINECA (Bologna) one year ago. CINECA is the major computing consortium in Italy (a Tier-0 site within PRACE!). There is an INFN-CINECA agreement which gives us access to some BG/Q computing time. Most of Fermi's computing power goes into PRACE allocations (so many of you could be involved!).
5 Computing facilities: good news! (2) INFN got money from the Research Ministry (MIUR) for the SUMA project. SUMA plans to support computational physics goals and at the same time aims to explore all suitable ways in which the technological developments made at INFN can be put to good use for the present and future needs of computational physics. SUMA groups all the INFN groups active in Lattice QCD. The main task for us is of course to look for (medium- and long-term) strategies to optimize LQCD codes on modern (and future) computer architectures. A concrete task: explore LQCD performance on (more or less) prototypal systems. EURORA: a GPU- and/or MIC-based system, in many respects a follow-up of AURORA. QUONG: a GPU-based system designed by the Rome APE group. (Francesco Di Renzo, 19th ILDG workshop)