KAGRA DMG Status. 27th June 2015, Gwangju, Korea KAGRA DMG group


JGW-G1503736-v3
KAGRA DMG Status
27th June 2015, Gwangju, Korea

KAGRA DMG group
Leader: N. Kanda
Sub-leader: K. Oohara
Effort members: S. Eguchi, K. Hayama, Y. Hiranuma, Y. Itoh, M. Kaneyama, G. Kang, S. Miyoki, W. T. Ni, K. Sakai, Y. Sasaki, H. Tagoshi, H. Takahashi, K. Tanaka, S. Ueki, T. Yamamoto, T. Yokozawa, H. Yuzurihara, X. Zhai

KAGRA data have to be distributed:
- through the KAGRA tiers, from the detector site to archives and computing sites;
- to collaborators, including overseas;
- to/from other observatories;
- to GW follow-ups / counterparts.

Target of DMG (Data Management Subsystem)
The Data Management Group (DMG) is in charge of data collection, distribution and archiving. The tasks:
- KAGRA data system: sending data from Kamioka to Kashiwa and the other sites.
- Hardware construction.
- Software development and implementation.
- Production of calibrated data, h(t) and h(f), which can be used for data analysis immediately.

Overview of KAGRA Data Flow
[Diagram: KAGRA data flow, running from faster (upstream) to later (downstream)]
- Tier-0, detector (tunnel): raw data ~20 MB/s.
- Tier-0.5, Kamioka: proc. data ~1 MB/s (option: raw data without permanent storage); forwards raw data ~20 MB/s + proc. data ~1 MB/s by socket (or/and an alternative).
- Tier-0.5, Osaka City U.: low-latency alert database; sends low-latency h(t) alerts in GCN format to counterparts / follow-ups.
- Tier-0 archive, Kashiwa: raw + proc. data.
- Tier-1 archives, RESCEU (TBD) and a mirror (TBD): proc. data + partial raw data, via GRID or an alternative.
- Tier-2: partial raw & proc. data sets for follow-ups / counterparts.
- Tier-3: end-user sites.
- Data sharing with other observatories: overseas GW experiments receive low-latency h(t) (less amount) and the bulk of data (larger amount).

Ranks of KAGRA data
1. Raw data: derived from the interferometer and all experimental apparatus.
2. Calibrated data (pre-processed data): generated on-line at the Kamioka site, consisting of h(t), h(f) and detector-characterization results. Spool storage will be prepared on-site. Calibrated data will be transferred to Kashiwa continuously and smoothly, without delay.
3. Main mass storage at Kashiwa: the primary archive of raw data and calibrated data.
4. Low-latency distribution of limited data, e.g. h(t) and data-quality flags, for transient event searches. This may include the partial distribution listed below.
5. Mirroring: full data, including raw data.
6. Partial distribution: the GW channel and the minimal channels needed by typical searches, without raw data.

Amount of Data
- iKAGRA (about 1 month, December 2015):
  20 MB/s at 100% duty -> 100 TiB, Kamioka -> Kashiwa
  1 MB/s at 100% duty -> 5 TiB, Kamioka -> Osaka City U. / Osaka U.
- Commissioning (2016-2017):
  20 MB/s at ~5-10%(?) duty, Kamioka -> Kashiwa
  1 MB/s at ~5-10%(?) duty, Kamioka -> Osaka City U. / Osaka U.
- bKAGRA (2017-):
  20 MB/s at 100% duty -> 3 PB / 5 yrs, Kamioka -> Kashiwa
  1 MB/s at 100% duty -> 150 TiB / 5 yrs, Kamioka -> Osaka City U. / Osaka U.
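These totals follow from rate x duty x duration. The snippet below is only a cross-check of that arithmetic (a sketch, not DMG software); it reproduces the ~630 TB/yr raw-data figure quoted on the hardware slide and the bKAGRA 5-year totals.

```python
# Cross-check of the data-volume budget: volume = rate * duty * duration.
SEC_PER_YEAR = 365.25 * 24 * 3600

def volume_tb(rate_mb_s, duty, years):
    """Total volume in TB (10^12 bytes) for a given rate, duty cycle and duration."""
    return rate_mb_s * 1e6 * duty * years * SEC_PER_YEAR / 1e12

print(volume_tb(20, 1.0, 1))  # ~631 TB/yr of raw data at 20 MB/s, 100% duty
print(volume_tb(20, 1.0, 5))  # ~3156 TB, i.e. ~3 PB / 5 yrs (bKAGRA raw, Kamioka -> Kashiwa)
print(volume_tb(1, 1.0, 5))   # ~158 TB, i.e. ~150 TiB / 5 yrs (proc. data to Osaka)
```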

Tier of Data Distribution
The specific sites for Tier-1, 2 and 3 have not been decided completely; this is a tentative plan.
(Columns: tier / site(s) / purpose / raw / calibrated / detector characterization / amount of data for 5 yrs / event alert)
- Kamioka / DAQ: raw, calibrated and DetChar data partial (spool); 200 TB; event alert partial.
- Tier-0, Kashiwa / main storage: 5 PB; event alert not yet determined.
- Tier-0.5, Osaka City and Osaka / low latency: raw data NA or a small amount; 500 TB.
- Tier-1, site undecided / full mirroring: 5 PB; event alert NA.
- Tier-2, candidates: RESCEU and some others (analysis bases) / offline searches: raw data NA; 500 TB; event alert NA.
- Tier-3, end users / development: raw data NA; calibrated and DetChar data partial; amount not yet discussed; event alert NA.

iKAGRA Data Transfer System
- The hardware for the iKAGRA data system is already installed: Kamioka -> Kashiwa (Tier-0). (The bKAGRA final storage is not yet procured.)
- The Tier-0.5 hardware for the low-latency search was installed in Feb. 2015 at Osaka City U. and Osaka U.
- One of the Tier-2 sites, RESCEU, has prepared its data storage. (The connection protocol is not fixed yet.)
- The software is now the urgent task.
[Diagram: Kamioka connected via VPN to Kashiwa, Osaka City U. / Osaka U., and RESCEU]

200 TiB Lustre file system, placed at the computer area beside the control room, 1st floor of the analysis building.

Hardware Structure
[Diagram: iKAGRA data system overview, drawn by N. Kanda, last update 2014/8/19]

Mine (tunnel):
- The IFO / DSG front ends and the environmental monitor (EPICS layer) feed the data concentrator.
- Frame writers k1fw0 and k1fw1; primary data servers k1dm0 / hyades-0 with 20 TB disk arrays.
- InfiniBand switch; NDS; MDS/OSS algol-01; low-latency and DetChar hosts; 10G / 1G HUBs; monitor hosts in the control room.

Tunnel <-> surface building: dedicated optical fiber, 4.5 km.

Kamioka surface:
- Data transfer server aldebaran; login / job management servers taurus-01 and taurus-02; VPN gwave_kamioka (to SINET).
- Lustre file system: MDS crab-mds-01/02 and OSS crab-oss-01/02 on InfiniBand switches, with MDT and OST disk arrays.

Kashiwa:
- VPN gwave_kashiwa; login servers perseus-01 and perseus-02; calculation nodes pleiades-01 to pleiades-04; ICRR interoperable computer system.

KAGRA's raw data rate: ~20 MB/s (~630 TB/yr).

Servers at Osaka (for the Low-Latency Search)
Two cluster systems at Osaka City U. and Osaka U. were upgraded in Feb. 2015:
- E5-2697v3 (2.6 GHz, 14 cores x 2 CPUs, 128 GB memory per node) x 7 nodes x 2 sites -> 392 cores in total.
- Scientific Linux; Condor for job management.
[Diagram, per site: IPCOM VPN unit and SR-X340TR1 switch; RX350S7 server (LDAP, NFS, disk); RX200S7 and RX2540M1 Condor nodes; SH1508ATC hubs; SmartUPS units; 10GBASE-CR, 1000BASE-T and 100BASE-T LANs]
The Osaka U. system will move to Osaka City U. this July.
The clusters will be used for the low-latency search.

Transfer Test
Specification requirement: 40 MB/s (KAGRA raw data: 20 MB/s).

Tunnel -> analysis building on the surface (via the dedicated optical fiber, 10G):
- scp*, continuous transfer of dummy files, done by the system vendor (Fujitsu Co. Ltd.): ~150 MB/s.
- scp, 320 MB frame data, intermittent transfer every 16 s (by T. Yamamoto): ~150 MB/s during data sending.
- socket transfer** (by Oohara): ~1 GB/s during data sending.

Analysis building on the surface -> Kashiwa (via SINET):
- scp, 320 MB frame data, intermittent transfer every 16 s (by T. Yamamoto): ~50 MB/s during data sending.
- socket transfer** (by Oohara): ~50 MB/s during data sending.

* scp includes file I/O, transfer, encryption and decryption.
** socket transfer sends only the data, excluding file I/O.
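The socket-transfer figures above isolate network throughput from file I/O and encryption overhead. Below is a minimal sketch of that kind of measurement over plain TCP; it is illustrative only (the port number is hypothetical, and this is not the actual DMG transfer code): the sender pushes one 320 MB frame-sized buffer, and the receiver reports the effective rate.

```python
# Minimal TCP throughput test in the spirit of the socket-transfer benchmark:
# data only, no file I/O, no encryption. Port and sizes are illustrative.
import socket
import sys
import time

PORT = 50007             # hypothetical port
CHUNK = 1 << 20          # 1 MiB receive unit
FRAME_BYTES = 320 << 20  # one 320 MB frame file, as in the test above

def send_frame(host):
    """Send one frame's worth of bytes over a plain TCP connection."""
    with socket.create_connection((host, PORT)) as s:
        s.sendall(bytes(FRAME_BYTES))

def receive_frame():
    """Accept one connection and report the effective receive rate."""
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            got, t0 = 0, time.monotonic()
            while True:
                buf = conn.recv(CHUNK)
                if not buf:
                    break
                got += len(buf)
            dt = time.monotonic() - t0
            print(f"{got / dt / 1e6:.1f} MB/s from {addr[0]}")

if __name__ == "__main__":
    receive_frame() if sys.argv[1] == "recv" else send_frame(sys.argv[1])
```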

Transfer Test (GRID)
- KEK (a high-energy physics institute in Japan) supports the KAGRA V.O.
- Testing GRID and VO usage between our hosts and KEK (by Y. Itoh).
- Testing data transfer between our local GRID machines at Nagaoka Tech. U. (by Sasaki & Takahashi).
[Diagram: two Globus Toolkit hosts at *.nagaokaut.ac.jp exchanging data via GridFTP and GRAM; host A: CentOS 6.2, GT 5.2.5, SimpleCA, MyProxy, GridFTP, GRAM5; host B: CentOS 4.7, GT 5.2.3, MyProxy, GridFTP, GRAM5]
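In practice, a GridFTP transfer test between two Globus Toolkit hosts like these reduces to a globus-url-copy call once a proxy certificate is in place. The sketch below is a generic illustration under that assumption; the file names and host are hypothetical, and this is not the actual test script.

```python
# Hypothetical GridFTP transfer test between two Globus Toolkit hosts.
# Assumes a valid proxy certificate already exists (e.g. via MyProxy).
import subprocess

SRC = "file:///tmp/testframe.gwf"                            # hypothetical local file
DST = "gsiftp://gridhost.nagaokaut.ac.jp/tmp/testframe.gwf"  # hypothetical remote path

# globus-url-copy is the standard Globus Toolkit transfer client;
# -vb prints the transferred bytes and the instantaneous throughput.
subprocess.run(["globus-url-copy", "-vb", SRC, DST], check=True)
```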

Software of DMG
A lot of things have to be done this summer:
- Receive frame data from the digital control system of the interferometer.
- Socket-based file transfer from the tunnel to the surface at Kamioka, and from Kamioka to Kashiwa.
- Data will also be sent to Osaka City U.
- The Kashiwa server will be the Tier-0 storage.

KAGRA Data File I/O and Transfer
[Diagram: processes at each site; the links are shared-memory I/O, file I/O / other access, socket data streams, and command / operation log messages]

Mine (DGS system, hyades):
- Frame writers produce raw data (16 s files) and raw EPICS-channel data (slow), written to NFS.
- kdmg_shm (shared-memory management script) keeps the status of DAQ in shared memory, with the last record of status written to a log file.
- kdmg_log (log record/display), a status monitor, and kdmg_ctrl (control process: start/stop/abort).
- kdmg_watch (new-data watch) watches the external disk (FC, NFS) and forks kdmg_raw_sender (data sender) for each new raw data file.

Kamioka (aldebaran):
- kdmg_raw_receiver (data receiver) receives the raw data; kdmg_shm, kdmg_ctrl and a status monitor run as in the mine.
- kdmg_watch watches the Lustre disk system (FEFS) and forks kdmg_procdata (calibration and proc-data generator) for new raw data.
- kdmg_raw_sender and kdmg_proc_sender (data senders) forward the raw and proc data to Kashiwa (perseus) and Osaka City U. (Shingakujyutu clusters).

Kashiwa (perseus):
- kdmg_raw_receiver and kdmg_proc_receiver (data receivers, with parity check) receive the raw and proc data streams from Kamioka (aldebaran).
- kdmg_ctrl (control process: start/stop/abort), kdmg_shm (shared-memory management; status of DAQ, last record of status), a status monitor, and a log system (log daemon via xinetd) with log record/display.
- Storage on the Lustre disk system (FEFS): raw + proc data.

Tier-0.5, Osaka City U. (Shingakujyutu clusters), also serving Tier-2:
- kdmg_proc_receiver (data receiver, with parity check) receives the proc data.
- kdmg_ctrl, kdmg_shm, status monitor and log file, as at the other sites.
- Proc data are stored on the local disk system.

We are employing socket transfer, parity checks, shared memory, a log daemon and xinetd; these are mature and solid techniques. Work hard!
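The kdmg_* names above come from the diagrams; their sources are not shown here, so the sketch below is only a hypothetical illustration of the pattern they describe: a watcher polls for newly written frame files and forks off a send per file, and the receiver verifies a digest (standing in for the parity check) before committing the file to the disk system.

```python
# Hypothetical sketch of the watch -> send -> receive-and-verify pattern
# described above. Directory, port and file naming are illustrative; this
# is not the actual kdmg_* implementation.
import hashlib
import os
import socket
import struct
import time

WATCH_DIR = "/data/raw"   # assumed drop directory of the frame writer
OUT_DIR = "/archive/raw"  # assumed destination on the receiving disk system
PORT = 50008              # hypothetical port

def recv_exact(conn, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def send_file(host, path):
    """Send length, payload, then an MD5 digest (stand-in for the parity check)."""
    with open(path, "rb") as f:
        data = f.read()
    with socket.create_connection((host, PORT)) as s:
        s.sendall(struct.pack("!Q", len(data)))
        s.sendall(data)
        s.sendall(hashlib.md5(data).digest())

def watch_and_send(host):
    """Poll the drop directory for new frame files and send each one once."""
    seen = set()
    while True:
        for name in sorted(os.listdir(WATCH_DIR)):
            if name.endswith(".gwf") and name not in seen:
                seen.add(name)
                send_file(host, os.path.join(WATCH_DIR, name))
        time.sleep(1)  # frames arrive every 16 s, so 1 s polling is ample

def receive_loop():
    """Receive files, verify the digest, and commit only verified data."""
    with socket.create_server(("", PORT)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                (size,) = struct.unpack("!Q", recv_exact(conn, 8))
                data = recv_exact(conn, size)
                if recv_exact(conn, 16) == hashlib.md5(data).digest():
                    dest = os.path.join(OUT_DIR, f"{time.time_ns()}.gwf")
                    with open(dest, "wb") as f:
                        f.write(data)
```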

Schedule
There is a delay on the software construction. [Gantt chart with a "Now" marker indicating the current date]
