Image Processing in Astronomy: Current Practice & Challenges Going Forward

Image Processing in Astronomy: Current Practice & Challenges Going Forward. Mario Juric, University of Washington. With thanks to Andy Connolly, Robert Lupton, Ian Sullivan, David Reiss, and the LSST DM Team.

An analog universe (1950s). Visual inspection: 1.6 million "pixels" of 10x10 arcmin each. Processing rate: analysis 1947-1954, published 1967 (Shane-Wirtanen 1967).

A digitized universe (1980s). Microdensitometers: 10^12 pixel sky, 0.5 arcsec pixels. Processing rate: 4 hours per plate (APM machine, ~200 MHz Pentium Pro).

A digital universe (2000s). Mosaic cameras: 10^11 pixel sky, 0.45 arcsec pixels. Processing rate: 4 MB/s, 250K lines of code (SDSS).

The Large Synoptic Survey Telescope. A project to build the biggest optical astronomical survey in existence, bringing data to topics ranging from the Solar System to the nature of Dark Energy. [Facility labels: 30 m diameter dome; 1.2 m diameter atmospheric telescope; control room and heat-producing equipment (lower level); 1,380 m^2 service and maintenance facility; calibration screen; base facility; stray light and wind screen; 350 ton telescope.]

LSST camera: a 3.2 Gigapixel camera. Modular design: 3200 Megapix = 189 x 16-Megapix CCDs. 9 CCDs share electronics as a "raft" (21 rafts = camera). 100 μm deep-depletion devices (10 μm pixels).

Sensors at scale: 6.5x10^9 pixel sky, 0.2 arcsec pixels. Processing rate: 170 MB/s, 1.1M lines of code. Credits: HSC; John Peterson.

A Big Data Universe. One chip (4k x 4k, 18 bits/pixel) is 0.5% of the full image. Expect ~2000 exposures per night, 300 nights a year, for 10 years: roughly 4 PB of raw imaging data per year.
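As a sanity check on these numbers, here is a back-of-the-envelope Python estimate using only the figures quoted on these slides (3.2 Gpix per exposure, 18 bits/pixel, ~2000 exposures per night, 300 nights per year):

# Back-of-the-envelope raw data volume, from the numbers quoted above.
pixels_per_exposure = 3.2e9        # 3.2 Gigapixel camera
bytes_per_pixel = 18 / 8           # 18 bits/pixel
exposures_per_night = 2000
nights_per_year = 300

gb_per_exposure = pixels_per_exposure * bytes_per_pixel / 1e9
pb_per_year = gb_per_exposure * exposures_per_night * nights_per_year / 1e6
print(gb_per_exposure, pb_per_year)   # ~7.2 GB per exposure, ~4.3 PB per year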

What do astronomers care about?
- What's on the image?
  - Stars (point sources)
  - Galaxies (extended objects)
- Where is it?
  - Relatively (in pixels, to ~few hundredths of a pixel)
  - Absolutely (coordinates on the sky)
- How bright is it?
- Is it changing in time?
- Is it moving?
- What is its shape?
  - Of a particular object
  - Statistically, for a class of objects

How do astronomical images come to be? Credit: John Peterson (Purdue) and the PhoSim Team

ImSim Description

Point Spread Function Estimation

Learning by fitting models. Object characterization (models): stars, point-source model; galaxies, double-exponential models. [Diagram: Instrumental Signature Removal + PSF model; model realization; goodness of fit.]
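As a toy illustration of the model-fitting idea (a hypothetical sketch, not the actual survey pipeline code): fit a point-source model, here assumed to be a circular Gaussian PSF with free position, flux, and background, to a small image stamp by least squares.

# Toy point-source fit: circular Gaussian PSF with free (x0, y0, flux, background).
import numpy as np
from scipy.optimize import least_squares

def psf_model(params, yy, xx, sigma=1.5):
    x0, y0, flux, bkg = params
    psf = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
    return flux * psf / psf.sum() + bkg

def fit_point_source(stamp, noise_sigma=1.0):
    yy, xx = np.indices(stamp.shape)
    resid = lambda p: ((stamp - psf_model(p, yy, xx)) / noise_sigma).ravel()
    p0 = [stamp.shape[1] / 2, stamp.shape[0] / 2, stamp.sum(), np.median(stamp)]
    return least_squares(resid, p0).x   # best-fit x0, y0, flux, background

The chi-square of the residuals then measures the goodness of fit, closing the loop sketched on the slide: realize the model, compare to the pixels, repeat.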

1,231,051,050 rows (SDSS DR10, PhotoObjAll table) ~500 columns

Cataloging the Sky What we re doing is decomposing and modeling the sky in a way that makes physical sense.

Compressing the Sky. What we're doing is decomposing and modeling the sky in a way that makes physical sense. But you may also think of this as developing a very fancy lossy compression technique.

Beyond a Single Image. Going deep: coaddition. Changes in time: image differencing.

Image Differencing

Why Difference?

Align, Subtract, Profit! Right?...

Alard-Lupton 97 Algorithm. [Diagram: Image 1 and Image 2 with PSFs Φ1(x) and Φ2(x); a convolution kernel κ(x) matches the two PSFs; rows show PSF, truth, noise, and image for each, plus the resulting image difference.]
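A minimal sketch of the PSF-matching step under strong simplifications: a single constant kernel on a delta-function basis, fit by linear least squares. The real Alard-Lupton algorithm uses Gaussian-times-polynomial kernel bases and lets the kernel vary across the image; the function names below are illustrative.

# Fit a PSF-matching kernel K so that (template convolved with K) ~ science image,
# then subtract. Delta-function kernel basis, wrap-around shifts: a sketch only.
import numpy as np
from scipy.signal import fftconvolve

def fit_matching_kernel(template, science, ksize=7):
    half = ksize // 2
    # Each basis element is a delta function at one kernel offset, so its
    # response is the template shifted by that offset (np.roll wraps at edges).
    cols = []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            cols.append(np.roll(template, (dy, dx), axis=(0, 1)).ravel())
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, science.ravel(), rcond=None)
    return coeffs.reshape(ksize, ksize)

def difference_image(template, science, kernel):
    matched = fftconvolve(template, kernel, mode="same")
    return science - matched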

Works well, but:
- Assumes the template PSF is smaller than the image PSF. We convolve with a PSF when detecting sources, so this may not be a problem in practice.
- Assumes the template has no noise, since it is derived from a previous set of observations.
Recent work has shown that for Gaussian, heteroscedastic noise we can take a Fourier transform, compute the log-likelihood, and prewhiten the images (Zackay et al. 2016). David Reiss @ UW is developing production code for LSST.
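A heavily simplified sketch of the prewhitening idea: divide the Fourier transform of the difference image by the square root of its noise power spectrum, so the residual noise becomes approximately white. This is illustrative only; see Zackay et al. 2016 for the full statistic.

# Prewhiten a difference image given (an estimate of) its 2D noise power spectrum.
import numpy as np

def prewhiten(diff, noise_power):
    # noise_power: 2D array of the same shape as diff, strictly positive.
    D = np.fft.fft2(diff)
    return np.real(np.fft.ifft2(D / np.sqrt(noise_power)))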

Real world is more complicated. In real-world applications, differencing two images is never perfect; in fact, it's far from perfect! The observed difference images are typically littered with artifacts, false positives that make astronomers sad and unhappy. We typically see 100:1 to 10:1 false-to-true detection ratios (that is not a typo!).

Real world is more complicated. Where do these things come from?
- Image misalignments
- Imperfect knowledge of PSF variation across the image
- CCD defects
- Readout electronics artifacts
- Optical ghosts and glints
http://www.ishootshows.com/2011/07/13/understanding-lens-flare-ghosting/

Machine Learning to the Rescue! Above: Figure 1, Goldstein et al. (2015), AJ, 150, 82. Given a sample of real detections and false detections, teach the computer to recognize the difference between the two and assign a score, t, to each (t = 1 -> real, t = 0 -> false).
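A minimal real/bogus classification sketch (not the Goldstein et al. pipeline; the features and toy training data below are made up for illustration): extract a few numbers from each difference-image cutout, train a random forest on labeled examples, and score new detections with t in [0, 1].

# Toy real/bogus classifier for difference-image detections.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cutout_features(stamp):
    # A few illustrative features of a 2D cutout around a detection.
    peak = stamp.max()
    return [
        peak,                               # peak flux
        stamp.min() / (abs(peak) + 1e-9),   # negative-to-positive ratio ("dipole-ness")
        float((stamp > 0.5 * peak).sum()),  # rough footprint size
        stamp.std(),                        # local structure / noise
    ]

# Toy training set of random stamps and labels, just to exercise the code;
# in practice the stamps and labels (1 = real, 0 = bogus) come from a vetted sample.
rng = np.random.default_rng(0)
stamps = [rng.normal(size=(21, 21)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)

X = np.array([cutout_features(s) for s in stamps])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
t = clf.predict_proba(X)[:, 1]   # score per detection; threshold to keep likely-real events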

Option #2: Understand the root cause. Astrometric misalignment introduces dipoles in the difference images; misalignments of >2% of a pixel will dominate the number of false positives. A major source is Differential Chromatic Refraction (DCR): the atmosphere refracts (shifts) a source more for blue light than for red, even for light measured through the same filter.

Example: differential refraction. [Figure: wavelength vs. spatial position, with profiles y(x) and S(x) and source spectrum B(λ); annotations mark light passing through less atmosphere vs. more atmosphere.]

DCR-corrected Templates (work by Ian Sullivan, UW LSST Group). Assume a model for the image, y, that is made up of a series of images, each at a different wavelength (i.e., a hyperspectral cube).

DCR-corrected Templates (work by Ian Sullivan, UW LSST Group). [Panels: subtract two images; find closest image; DCR subtraction.] A non-negative least-squares algorithm works but is slow and requires small (50x50 pixel) patch solutions; a regularized solution takes about 700 hrs (for a full LSST image).
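A toy 1D version of the hyperspectral forward model, with made-up per-epoch DCR shifts: each observed epoch is a sum of sub-band planes, each shifted by an amount that depends on the epoch's airmass and the sub-band wavelength, and the planes are recovered with non-negative least squares.

# Recover per-wavelength planes from several DCR-shifted observations (1D toy).
import numpy as np
from scipy.optimize import nnls

npix, nbands = 50, 3
shifts = [[0, 1, 2], [0, 2, 4], [0, -1, -2]]   # pixels, per epoch and sub-band (made up)

def shift_matrix(s, n=npix):
    # Operator that shifts a 1D profile by s pixels (integer shifts, wrap-around).
    return np.roll(np.eye(n), s, axis=0)

# Forward model: each epoch y_k = sum_j S(s_kj) x_j, all epochs stacked into one system.
A = np.vstack([np.hstack([shift_matrix(s) for s in epoch]) for epoch in shifts])

# Simulate a point source present in every sub-band, then solve for the planes.
x_true = np.zeros(nbands * npix)
x_true[[10, npix + 10, 2 * npix + 10]] = 1.0
y = A @ x_true
x_hat, _ = nnls(A, y)   # non-negative solution; x_hat.reshape(nbands, npix) gives the planes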

Going Deeper: Coaddition (a.k.a. the astronomer's HDR)

Why co-add? Pros: see fainter objects; computationally inexpensive. Cons: complicated PSF; correlated noise; loss of motion and time-variability information; more generally, loss of information. Left: Annis et al. 2011.
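For reference, the basic coaddition operation is just an inverse-variance-weighted mean of registered exposures. A minimal sketch, assuming the images have already been resampled to a common grid:

# Inverse-variance-weighted coadd of aligned exposures.
import numpy as np

def coadd(images, variances):
    # images, variances: lists of same-shape 2D arrays, one pair per exposure.
    w = np.array([1.0 / v for v in variances])        # per-pixel weights
    wsum = w.sum(axis=0)
    stack = (w * np.array(images)).sum(axis=0) / wsum
    return stack, 1.0 / wsum                          # coadd and its per-pixel variance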

Better than Coaddition: Multi-Epoch Fitting (MultiFit)

Fitting. A very simple idea: instead of co-adding pixels of individual observations and then fitting the model to the result, why not fit the model directly to each individual observation? [Diagram: galaxy/star models are warped and convolved into a transformed model for each exposure (Transformed Model 1 vs. Exposure 1, Transformed Model 2 vs. Exposure 2, ...), and all exposures are fit simultaneously: MultiFit (Simultaneous Multi-Epoch Fitting).]
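A toy sketch of the simultaneous multi-epoch fit, assuming a single point source, a circular Gaussian PSF per epoch, and known per-epoch astrometric offsets (all illustrative; the real MultiFit handles galaxy models, warps, and full PSF models):

# Fit one point source (x0, y0, flux) jointly to several exposures.
# Each epoch is a dict with keys: "image", "dx", "dy", "psf_sigma", "noise_sigma".
import numpy as np
from scipy.optimize import least_squares

def model(params, epoch):
    x0, y0, flux = params
    yy, xx = np.indices(epoch["image"].shape)
    dx, dy, sigma = epoch["dx"], epoch["dy"], epoch["psf_sigma"]
    psf = np.exp(-((xx - x0 - dx)**2 + (yy - y0 - dy)**2) / (2 * sigma**2))
    return flux * psf / psf.sum()

def multifit(epochs, p0):
    # Stack the per-epoch residuals and minimize them jointly: one model, all exposures.
    def resid(p):
        return np.concatenate(
            [((e["image"] - model(p, e)) / e["noise_sigma"]).ravel() for e in epochs])
    return least_squares(resid, p0).x   # e.g. p0 = [x_guess, y_guess, flux_guess]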

Recovering motion from the noise. Lang (2009): fit moving-source models to suspected moving stars in the SDSS Stripe 82 survey. In individual exposures the objects are undetected or marginally detected, and moving point-source and galaxy models are indistinguishable on the coadd.

Downsides. Computationally super intensive! Scales with the number of epochs (so ~100-1000x more computationally expensive for modern surveys like LSST!). Worse, we're greedy: the physics we're trying to study is so sensitive to biases that maximum-likelihood estimators are not enough; we want posterior PDFs for the parameters of each observed galaxy (20-200x more output storage!). We're building a ~2 PFLOP machine to do this (circa 2025). Still, this is all cheaper than building a bigger telescope (and/or launching it into space)!

Where we (know we) could use some help:
- Detection and characterization of diffuse sources
- Rejection of instrumental artifacts: how do we get better at removing ghosts, glints, flares, etc.?
- Multi-epoch image processing (scene reconstruction): we fit models of individual sources, but not the whole scene. Can we do better? Caveat: we deeply care about the measured object quantities (e.g., fluxes, shapes).
- Efficient implementations of existing algorithms
- Probably many other places that we don't know about!