An end-to-end simulation framework for the Large Synoptic Survey Telescope
Andrew Connolly, University of Washington
LSST in a nutshell
The LSST will be a large, wide-field, ground-based optical/near-IR survey of half the sky in the ugrizy bands, reaching r ~ 27.5 from ~1000 visits per field over a 10-year period. Alerts of detected changes on the night sky will be published within 60 seconds of each observation as the survey progresses. LSST will enable a wide variety of complementary scientific investigations: from searches for small bodies in the solar system, to precision astrometry of the Galaxy, to systematic measurements of cosmology using weak gravitational lensing. Much of the science of the LSST will be systematics limited.
LSST footprint (825 visits per field)
90% of the survey is the 18,000 sq. degree main survey; 10% is the NES, SCP, Galactic plane, deep drilling fields, and others. How do we optimize the fields, the observations, and the temporal sampling?
LSST science and engineering simulation tools
A Python simulation framework: Python provides a (somewhat) simplified interface to the simulation catalogs; databases contain the astrophysical sky (>10 TB of data); the framework is agnostic to where the cosmological data reside. Tools convert the simulated data to observations. Not all science applications require the same level of fidelity (e.g. derived catalogs can be used in place of images), and measurements must come with estimates of their uncertainties. All software is version controlled and open source: https://github.com/lsst
The LSST universe model (CatSim)
Source counts are based on Millennium simulations of the universe, matched to the observed densities and colors of sources (to reproduce observed number counts, sizes, and redshifts). We can model the systematics due to the optical design, sensor model, etc. to deliver SNRs and errors in the mock catalogs.
Cosmology embedded in an astrophysical background
Galactic structure model (with dust): main-sequence stars, giants, dwarfs, Cepheids, micro-lensing; proper motion and parallax (Jurić et al. 2008). Solar system model: 10 million main-belt asteroids, KBOs, TNOs, and Trojans (Grav et al. 2007).
Survey performance tools (OpSim) (Lynne Jones 2015)
Constraints on observations: properties of the site (e.g. sky background, visibility), system performance (settle time, readout time), and science requirements.
Metrics Analysis Framework (MAF)
An SN goes off every 5 days in each ~3 sq. degree region. How many are well observed? The maximum possible is ~600; we get 200-300 in the wide area. Dropping the requirement for 2 visits per night gives a 15% increase in SNe, equivalent to 1 year of operations.
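A metric like this can be sketched outside of MAF itself. The stand-in below assumes a hypothetical random visit history and calls an SN "well observed" if enough visits land in a ~45-day light-curve window; the window length and visit threshold are illustrative assumptions, not MAF's actual definition.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 10-year visit history for one field: 825 visits (the slide's
# per-field count) scattered over the survey, mimicking a few-day cadence.
visit_times = np.sort(rng.uniform(0.0, 3650.0, size=825))

def count_well_observed_sne(visit_times, survey_days=3650.0, sn_interval=5.0,
                            window=45.0, min_visits=7):
    """Count SNe (one going off every `sn_interval` days) whose ~45-day
    light curve is sampled by at least `min_visits` visits."""
    n_well = 0
    for t0 in np.arange(0.0, survey_days, sn_interval):
        n_vis = np.sum((visit_times >= t0) & (visit_times <= t0 + window))
        if n_vis >= min_visits:
            n_well += 1
    return n_well

print(count_well_observed_sne(visit_times))
```

Real MAF metrics follow the same pattern: a per-field function of the visit history, evaluated over a spatial slicer and summarized across the sky.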
Following the photon flow to generate images (John Peterson 2010)
ImSim Description
3.2 gigapixels; 9.6 sq. degrees; 20 million sources; 10^10 photons; 12.8 GB; 1000 CPU hours
An end-to-end simulation framework
DESC simulation needs: 2016-2017, 2017-2018, 2018-2019
Cosmology simulations: strong lensing
Simulated lensed quasar systems are sprinkled into the catalogs at the positions of AGN-host galaxies: the host galaxy is removed and a lens galaxy is inserted, followed by multiple images of the original source, with damped-random-walk variability.
Cosmology simulations: supernovae
Being well observed requires good temporal sampling in multiple bands to recover the light curve.
Conclusions
An open source simulation framework for reproducing LSST observations (cosmology embedded in astrophysical backgrounds), with the ability to embed other cosmological models and databases within the framework. We'd like SEDs for each source and realistic images (an SED per pixel). Simulated source catalogs carry through to images. Current work is on scalability and simplification of the interfaces.
Following the photon flow to generate images (Peterson et al. 2015; John Peterson 2010)
Optical Model + Tracking + Diffraction + Detector Perturbations + Lens Perturbations + Mirror Perturbations + Detector + Dome Seeing + Low-Altitude Atmosphere + Mid-Altitude Atmosphere + High-Altitude Atmosphere + Pixelization
Conservation laws on the LSST
m5 = 24.7 + 1.25 log10(t_vis / 30 s)
t_revisit = 3 days × (t_vis / 30 s)
N_vis = 1000 × (30 s / t_vis) × (T / 10 years)
Lower limit: surveying efficiency must be high (readout time, slew time) and the depth per visit must be deep (SNe, RR Lyrae, NEOs). Upper limit: the mean revisit time cannot be too long (SNe, NEOs), the number of visits must be large enough (systematics), and trailing losses for moving objects must be small.
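These scaling relations can be evaluated directly to see the trade-off the slide describes; a minimal sketch of the relations as written:

```python
import math

def survey_tradeoffs(t_vis, T_years=10.0):
    """Scaling relations from the slide: 5-sigma single-visit depth (mag),
    mean revisit time (days), and total visit count, as functions of the
    per-visit exposure time t_vis (seconds)."""
    m5 = 24.7 + 1.25 * math.log10(t_vis / 30.0)
    t_revisit = 3.0 * (t_vis / 30.0)
    n_vis = 1000.0 * (30.0 / t_vis) * (T_years / 10.0)
    return m5, t_revisit, n_vis

# Baseline 30 s visits over 10 years recover the fiducial numbers.
print(survey_tradeoffs(30.0))   # (24.7, 3.0, 1000.0)
# Doubling the visit time gains ~0.38 mag of depth but doubles the
# revisit time and halves the number of visits.
print(survey_tradeoffs(60.0))
```

This is the "conservation" at work: for a fixed total survey time, depth per visit trades directly against cadence and visit count.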
Astrometric accuracy: complement to Gaia (Ivezić, Beers, Jurić 2012, ARA&A, 50, 251)
Gaia: excellent astrometry (and photometry), but only to r < 20. LSST: photometry to r < 27.5 and time-resolved measurements to r < 24.5 (10 mas).
σ_π(t) = 3.0 mas × (t / 10 yr)^(-1/2)
σ_μ(t) = 1.0 mas/yr × (t / 10 yr)^(-3/2)
Photometric, proper motion, and trigonometric parallax errors are similar around r = 20.
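The two scaling laws above are straightforward to tabulate as the survey accumulates; a sketch:

```python
def parallax_error_mas(t_years):
    """sigma_pi(t) = 3.0 mas * (t / 10 yr)^(-1/2), from the slide."""
    return 3.0 * (t_years / 10.0) ** -0.5

def proper_motion_error_mas_per_yr(t_years):
    """sigma_mu(t) = 1.0 mas/yr * (t / 10 yr)^(-3/2), from the slide."""
    return 1.0 * (t_years / 10.0) ** -1.5

# Proper motion errors shrink faster with survey length (t^-3/2) than
# parallax errors (t^-1/2), since the time baseline adds extra leverage.
for t in (1.0, 5.0, 10.0):
    print(t, parallax_error_mas(t), proper_motion_error_mas_per_yr(t))
```

After the full 10-year survey the errors reach the quoted floors of 3.0 mas and 1.0 mas/yr.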
Photometric accuracy of LSST
1% photometry (0.5% repeatability); deep fields reach AB ~ 28th mag. About 10 billion galaxies, with 4 billion in a gold sample (i < 25.3); the gold sample extends to redshifts of >2.5, probing evolution. [Figure: photometric performance vs. redshift for fiducial red-sequence and Lyman-break galaxies to i < 25.]
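The quoted random and systematic errors can be combined into a per-source magnitude error. A sketch using the standard LSST random-error model from the overview paper (Ivezić et al.); the gamma value (roughly the r-band figure) and the 0.5% systematic floor are assumed here:

```python
import math

def photometric_error(m, m5, gamma=0.039, sys_floor=0.005):
    """Approximate LSST magnitude error: the standard random term
    sigma_rand^2 = (0.04 - gamma) * x + gamma * x^2, x = 10^(0.4 (m - m5)),
    added in quadrature to a systematic repeatability floor (mag)."""
    x = 10.0 ** (0.4 * (m - m5))
    sigma_rand_sq = (0.04 - gamma) * x + gamma * x * x
    return math.sqrt(sys_floor ** 2 + sigma_rand_sq)
```

By construction the random term reaches 0.2 mag at m = m5 (the 5-sigma detection limit), while bright sources approach the systematic floor.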
LSST data volume and scientific yields
Raw data: two 6.4-gigabyte images (one visit) every 39 seconds (~15 TB per night); ~1000 visits each night, ~300 nights a year; up to 450 calibration exposures per day.
Level 1: can detect >10 million real-time events per night, for 10 years; changes are detected and transmitted within 60 seconds of the observation.
Level 2: observe ~38 billion objects (24B galaxies, 14B stars); collect ~5 trillion observations ("sources") and ~32 trillion measurements ("forced sources") in a 20 PB catalog.
Level 3: user databases and workspaces ("mydb"), making the LSST software available to end-users and feeding the data back to the community.
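As a consistency check, the nightly raw-data volume follows from simple arithmetic on the slide's numbers; the usable hours per night below are an assumption not stated on the slide:

```python
# Rough nightly raw-data volume implied by the slide's numbers.
image_gb = 6.4           # one 3.2-gigapixel image at ~2 bytes per pixel
images_per_visit = 2
visit_cadence_s = 39.0   # one visit every 39 seconds
night_hours = 10.0       # assumed usable observing hours (not on the slide)

visits_per_night = night_hours * 3600.0 / visit_cadence_s
nightly_tb = visits_per_night * images_per_visit * image_gb / 1000.0
print(round(visits_per_night), round(nightly_tb, 1))   # ~900 visits, ~12 TB
```

This lands near the slide's ~1000 visits and ~15 TB per night; the daily calibration exposures plausibly account for much of the remainder.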