Binary Detection Using Multi-Hypothesis Log-Likelihood, Image Processing


Air Force Institute of Technology
AFIT Scholar, Theses and Dissertations, 3-14-2014

Binary Detection Using Multi-Hypothesis Log-Likelihood, Image Processing
Brent H. Gessel

Follow this and additional works at: https://scholar.afit.edu/etd

Recommended Citation: Gessel, Brent H., "Binary Detection Using Multi-Hypothesis Log-Likelihood, Image Processing" (2014). Theses and Dissertations. 604. https://scholar.afit.edu/etd/604

This Thesis is brought to you for free and open access by AFIT Scholar. It has been accepted for inclusion in Theses and Dissertations by an authorized administrator of AFIT Scholar. For more information, please contact richard.mansfield@afit.edu.

BINARY DETECTION USING MULTI-HYPOTHESIS LOG-LIKELIHOOD, IMAGE PROCESSING
THESIS
Brent H. Gessel, Captain, USAF
AFIT-ENG-14-M-34
DEPARTMENT OF THE AIR FORCE
AIR UNIVERSITY
AIR FORCE INSTITUTE OF TECHNOLOGY
Wright-Patterson Air Force Base, Ohio
DISTRIBUTION STATEMENT A: APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED

The views expressed in this thesis are those of the author and do not reflect the official policy or position of the United States Air Force, the Department of Defense, or the United States Government. This material is declared a work of the U.S. Government and is not subject to copyright protection in the United States.

AFIT-ENG-14-M-34
BINARY DETECTION USING MULTI-HYPOTHESIS LOG-LIKELIHOOD, IMAGE PROCESSING
THESIS
Presented to the Faculty, Department of Electrical and Computer Engineering, Graduate School of Engineering and Management, Air Force Institute of Technology, Air University, Air Education and Training Command, in Partial Fulfillment of the Requirements for the Degree of Master of Science in Electrical Engineering
Brent H. Gessel, B.S.E.E., Captain, USAF
March 2014
DISTRIBUTION STATEMENT A: APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED

AFIT-ENG-14-M-34
BINARY DETECTION USING MULTI-HYPOTHESIS LOG-LIKELIHOOD, IMAGE PROCESSING
Brent H. Gessel, B.S.E.E., Captain, USAF
Approved:
//signed// Stephen C. Cain, PhD (Chairman), Date: 20 Feb 2014
//signed// Keith T. Knox, PhD (Member), Date: 13 Feb 2014
//signed// Mark E. Oxley, PhD (Member), Date: 20 Feb 2014

AFIT-ENG-14-M-34

Abstract

One of the United States Air Force missions is to track space objects. Finding planets, stars, and other natural and synthetic objects is impacted by how well the tools of measurement can distinguish between these objects when they are in close proximity. In astronomy, the term binary commonly refers to two closely spaced objects. Splitting a binary occurs when the two objects are successfully detected. The physics of light, atmospheric distortion, and measurement imperfections can make binary detection a challenge. Various post-processing techniques can significantly increase the probability of detection. This paper explores the potential of a multi-hypothesis approach. Each hypothesis assumes that one, two, or no point sources exist in a given image. The log-likelihoods of the hypotheses are compared to obtain detection results. Both simulated and measured data are used to demonstrate performance with varying amounts of atmospheric distortion and signal-to-noise ratios. Initial results show a significant improvement when compared to detection via imaging by correlation. More work remains to compare this technique to other binary detection algorithms and to explore cluster detection.

Table of Contents

Abstract
Table of Contents
List of Figures
List of Tables
List of Acronyms
I. Introduction
    1.1 Binary detection
    1.2 Space situational awareness
    1.3 Research objectives
    1.4 Organization
II. Background
    2.1 Post-Process Imaging
        2.1.1 Atmospheric Turbulence
        2.1.2 Deconvolution
            2.1.2.1 Knox-Thompson
            2.1.2.2 Bispectrum
        2.1.3 Imaging by Correlation
        2.1.4 Binary detection by post-process image reconstruction summary
    2.2 Multi-Hypothesis Detection
    2.3 Chapter Summary
III. Methodology
    3.1 Derivation of multi-hypothesis algorithms
        3.1.1 Zero point source derivation
        3.1.2 Single point source derivation
        3.1.3 Two point source derivation
    3.2 Software implementation
        3.2.1 Point source threshold
        3.2.2 Zero point source implementation
        3.2.3 Single point source implementation
        3.2.4 Two point source implementation
        3.2.5 Decision process
    3.3 Simulation test model
        3.3.1 Modeling atmospheric effects
        3.3.2 Test variables
        3.3.3 Performance measurements for simulated data
        3.3.4 Simulated comparison model
    3.4 Measured data test model
        3.4.1 Description of measured data
        3.4.2 Deriving a valid PSF
        3.4.3 Determining success of measured data test
IV. Results
    4.1 Multi-hypothesis binary detection simulation results
        4.1.1 Image generation
        4.1.2 Log-likelihood bias correction
        4.1.3 Probability of Detection (P_D) sample results
        4.1.4 Probability of false alarm (P_fa) simulation results
        4.1.5 P_D and P_fa result summary
    4.2 Comparison with imaging by correlation technique
        4.2.1 Probability of False Alarm (P_fa) comparison results
        4.2.2 Probability of Detection (P_D) comparison results
        4.2.3 Comparison result summary
    4.3 Measured data processing results
        4.3.1 First image: two points spaced far away
        4.3.2 Second image: two points in close proximity
        4.3.3 Third image: two points touching
        4.3.4 Fourth image: single point
        4.3.5 Measured data result summary
V. Future work
    5.1 Summary
    5.2 Future work
Bibliography

List of Figures

2.1 Basic adaptive optic process.
2.2 Speckle interferometry simulation with 200 independent frames. (a) The binary source image, (b) the simulated average of short exposure images, (c) speckle transfer function, (d) speckle transfer function showing fringe spacing.
2.3 Stack of 50 short exposure images, each simulated through a unique random phase screen with r_0 = 30 cm.
2.4 Result of cross spectrum method with r_0 = 30 cm: original image (a), cross section of original image (b), reconstructed image (c), cross section of reconstructed image (d).
2.5 Results with different numbers of iterations: (a) 1, (b) 5, (c) 10, (d) 15, (e) 80, and (f) 100 iterations.
2.6 Results of the correlation technique: (a) original image, (b) original image cross section, (c) reconstructed image, and (d) reconstructed image cross section.
2.7 Overview of binary hypothesis method. Chapter 3 will work through derivations of each step.
4.1 Sample simulated detected images of binary source with 500 photons for each point. Samples taken for various background noise and D/r_0 values as given next to each image. Zoomed in to 31x31 pixels.
4.2 Sample simulated detected images of single point sources used in calculating false alarm rates. Samples taken for various background noise and D/r_0 values as given next to each image. Zoomed in to a 31x31 pixel area.
4.3 False alarm rate versus D/r_0 for a point source with an intensity of 1000 photons for (a) background noise level=1, (b) background noise level=2, (c) background noise level=3.
4.4 False alarm rate versus D/r_0 for a point source with an intensity of 1500 photons for (a) background noise level=1, (b) background noise level=2, (c) background noise level=3.
4.5 False alarm rate versus D/r_0 for a point source with an intensity of 2000 photons for (a) background noise level=1, (b) background noise level=2, (c) background noise level=3.
4.6 Detection rate versus D/r_0 for a binary source with intensities of 500 and 500 photons for (a) background noise level=1, (b) background noise level=2, (c) background noise level=3.
4.7 Detection rate versus D/r_0 for a binary source with intensities of 1000 and 500 photons for (a) background noise level=1, (b) background noise level=2, (c) background noise level=3.
4.8 Detection rate versus D/r_0 for a binary source with intensities of 1000 and 1000 photons for (a) background noise level=1, (b) background noise level=2, (c) background noise level=3.
4.9 First measured image. (a) Detected image, (b) result of hypothesis one, (c) result of hypothesis two. The log-likelihood of hypothesis two was larger and therefore a binary was detected.
4.10 Second measured image. (a) Detected image, (b) result of hypothesis one, (c) result of hypothesis two. The log-likelihood of hypothesis two was larger and therefore a binary was detected.
4.11 Third measured image. (a) Detected image, (b) result of hypothesis one, (c) result of hypothesis two. The log-likelihood of hypothesis two was larger and therefore a binary was detected.
4.12 Fourth measured image. (a) Detected image, (b) result of hypothesis one, (c) result of hypothesis two. The log-likelihood of hypothesis one was greater and so a binary was not detected.

List of Tables

2.1 First 10 Zernike Circular Polynomials [33].
3.1 Common symbols.
4.1 Probability of Detection (P_D) results using binary source with 500/500 photons.
4.2 Probability of Detection (P_D) results using binary source with 1000/500 photons.
4.3 Probability of Detection (P_D) results using binary source with 1000/1000 photons.
4.4 Probability of False Alarm (P_fa) results using point source with 1000 photons.
4.5 Probability of False Alarm (P_fa) results using point source with 1500 photons.
4.6 Probability of False Alarm (P_fa) results using point source with 2000 photons.

List of Acronyms

CCD: Charge-Coupled Device
P_D: Probability of Detection
P_fa: Probability of False Alarm
PSF: Point Spread Function
OTF: Optical Transfer Function
FT: Fourier Transform
PSD: Power Spectral Density
LiDAR: Light Detection And Ranging
PMF: Probability Mass Function
SST: Space Surveillance Telescope
DARPA: Defense Advanced Research Projects Agency
GEO: Geostationary Earth Orbit

BINARY DETECTION USING MULTI-HYPOTHESIS LOG-LIKELIHOOD, IMAGE PROCESSING

I. Introduction

Comparing statistical hypotheses to determine the likelihood of a given event is a proven technique used in many fields, particularly digital communication. The application of a multi-hypothesis test algorithm to the detection of binary stars or other space objects is a new area of exploration. This thesis explores the usefulness of a multi-hypothesis technique for resolving close binary objects in space. The first task is to derive a multi-hypothesis algorithm specifically for binary detection, and then provide results for various simulated imaging conditions as well as measured binary images. Simulated results are compared with another technique to explore potential detection improvements.

1.1 Binary detection

The ability to discriminate between closely spaced objects in space has been, and continues to be, a challenge. Finding planets, stars, and other natural celestial objects, as well as keeping track of satellites and space debris, are all impacted by how well the tools of measurement can distinguish between objects in close proximity. In astronomy, the term binary commonly refers to two stars in the same system. Splitting a binary occurs when the two stars are successfully identified. Some binaries are easily split using basic equipment. As the source intensity, object distance, and/or atmospheric distortion varies, a binary can appear as just a single object. In this paper, the term binary refers to any two closely spaced objects. Several image post-processing techniques have been developed

to increase spatial resolution and/or look for patterns common to binary objects. Most of the commonly used methods today focus on image reconstruction and deblurring. This thesis will show that if an image's Point Spread Function (PSF) is known and the source is a single or double point source, the log-likelihood of the most probable hypothesis provides a statistical measure of how likely it is that a blurred image contains a binary.

1.2 Space situational awareness

One of the United States Air Force missions is to track space objects, particularly of the synthetic kind. Due to the physics of spatial resolution, it is extremely difficult to resolve satellites in geosynchronous orbit (35,786 km). At this distance the size of a satellite is typically smaller than one pixel on a high quality Charge-Coupled Device (CCD) camera. For example, if a satellite in geosynchronous orbit is 15 meters wide (the length of a full size school bus) and a telescope with an aperture of 4 meters and a focal length of 64 meters is focused on it, the size of its image on the CCD would be:

\text{size} = \frac{(\text{sat. size})(\text{focal length})}{\text{distance}} = \frac{(15\,\text{m})(64\,\text{m})}{35{,}786{,}000\,\text{m}} = 2.64\,\mu\text{m}. \quad (1.1)

This is much smaller than a typical CCD pixel used in astronomical telescopes. This is also smaller than the Rayleigh criterion for spatial resolution for this same setup [14]:

\text{Resolution} = (1.22)(\lambda)(f/\#) = (1.22)(550\,\text{nm})(16) = 8.48\,\mu\text{m} \quad (1.2)

where λ is the wavelength of light and f/# is the f-number, the focal length divided by the diameter of the optical system. The atmosphere will further blur this image, and after a certain amount of exposure time the image will typically span a few to several pixels of light. At this distance, it can be extremely difficult to tell if there is one, two, or several objects in the spot of light. The multi-hypothesis method discussed in this thesis has the potential to increase the probability of correctly detecting binaries at geosynchronous orbit and in other scenarios important to the USAF.
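The two quantities in Equations 1.1 and 1.2 are straightforward to evaluate for any setup. The sketch below (function names are illustrative, not from the thesis) computes the geometric image size and the Rayleigh-limited spot size for arbitrary inputs.

```python
# Hypothetical helpers illustrating Equations 1.1 and 1.2.
def geometric_image_size(object_size_m, focal_length_m, distance_m):
    """Geometric size of an object's image at the focal plane (Eq. 1.1)."""
    return object_size_m * focal_length_m / distance_m

def rayleigh_spot_size(wavelength_m, f_number):
    """Rayleigh criterion expressed as a focal-plane distance (Eq. 1.2)."""
    return 1.22 * wavelength_m * f_number

# A 10 m object at 10,000 km imaged through a 50 m focal length system:
print(geometric_image_size(10.0, 50.0, 1.0e7))  # 5e-05 (i.e., 50 microns)

# Diffraction-limited spot for 550 nm light at f/16:
print(rayleigh_spot_size(550e-9, 16))
```

When the geometric image size falls below the Rayleigh spot size (as in the thesis example), the object is effectively a point source to the system, which is what motivates the point-source hypotheses used later.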

1.3 Research objectives

The question posed in this thesis is: how well, if at all, can a multi-hypothesis model correctly detect a binary pair in a blurred image? To answer that question, four research objectives are documented. First, a statistical model is derived in Section 3.1, where the derivations of the algorithms for zero, one, and two point sources are shown. The second objective is to implement the mathematical algorithms in a software simulation model. This model includes atmospheric effects and background noise, and creates a test environment for working out threshold parameters that maximize the probability of detection while minimizing the false alarm rate. The simulation is done in Matlab and is covered in Section 3.2. Once the binary hypothesis method is implemented and its results measured, it is important to compare them to another modern technique. The third objective is to compare results from another image detection method, specifically imaging by correlation with a threshold set for binary detection. Section 3.3 walks through an implementation of imaging by correlation in Matlab, and results are provided in Chapter 4. The final objective is to use measured imagery data where the number of objects is known and see how well the algorithm detects when a binary is present. The imagery data used is focused on a satellite in geosynchronous orbit with a dim star passing by at various distances, including the situation where the two appear as a single object. The binary decision from the algorithm is compared against the truth data and presented in Chapter 4.

1.4 Organization

This research document is organized in accordance with AFIT's thesis guidelines. Chapter 2 discusses current methods used to detect binaries and contrasts them with the proposed, unpublished, multi-hypothesis binary detection method. Chapter 3 provides the

derivation and simulation methods used to meet the thesis objectives outlined in the previous section. Chapter 4 contains the results of several of the simulation tests as well as the results of processing measured data. Chapter 5 discusses conclusions and opportunities for continued research and operational testing. Complete references for all sources cited are contained in the bibliography. Every attempt has been made to use consistent variables to enhance readability.
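The hypothesis comparison described in Section 1.3 can be sketched numerically. The snippet below is a simplified illustration only, not the thesis algorithm: the PSF is an assumed Gaussian rather than an atmospherically derived one, the candidate positions and intensities are taken as known, and all helper names are mine.

```python
import numpy as np

def gaussian_psf(size=31, sigma=2.0):
    """Stand-in PSF (assumed Gaussian); the thesis derives PSFs from atmospheric models."""
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def scene_model(psf, points):
    """Mean detected image for point sources given as (row shift, col shift, photons)."""
    model = np.zeros_like(psf)
    for dr, dc, photons in points:
        model += photons * np.roll(np.roll(psf, dr, axis=0), dc, axis=1)
    return model

def log_likelihood(data, model, eps=1e-12):
    """Poisson log-likelihood of the data given a model, dropping the data-only term."""
    return float(np.sum(data * np.log(model + eps) - model))

psf = gaussian_psf()
h1 = scene_model(psf, [(0, 0, 1000)])                # hypothesis: one point source
h2 = scene_model(psf, [(0, -2, 500), (0, 2, 500)])   # hypothesis: two point sources
data = h2.copy()  # noise-free binary scene; real data would be a Poisson draw of h2

# The hypothesis whose model assigns the data the larger log-likelihood wins:
print(log_likelihood(data, h2) > log_likelihood(data, h1))  # True
```

The decision rule is simply an argmax over the candidate log-likelihoods; the thesis additionally derives the zero-point-source hypothesis and the thresholds that control the false alarm rate.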

II. Background

High spatial resolution of an imaging system is key to achieving detection of binary objects. A standard telescope system cannot achieve diffraction-limited resolution due to factors such as atmospheric and optical aberration. Several techniques have been developed to close the gap between diffraction-limited resolution and a system's actual resolution. This chapter discusses some of the most common post-processing techniques for increasing spatial resolution in astronomical imaging. Additionally, a brief overview of multi-hypothesis statistics is provided.

Binary objects do not all behave in the same way. Although the imaging techniques discussed in this chapter can be used on all binaries, specific methods have been developed to look for planets and binary star systems. As gravitational fields of large planets and stars interact, a periodic movement, often referred to as wobble, can be detected [23]. Other indicators, such as periodic eclipsing and Doppler-like wavelength patterns, can infer the presence of a binary [6]. These methods have been successful on a subset of binaries but cannot be applied to binaries in general. This paper will not focus on gravitational, orbital, or other spectroscopic measurement techniques; rather, the focus will be on post-processing techniques useful in detecting any binary.

Aside from the natural effect of diffraction, the earth's atmosphere plays a large role in limiting spatial resolution. Virtually all real-time and post-processing de-blurring techniques require an understanding of how the atmosphere is changing the light. Because we know what a diffraction-limited point source looks like, we can compare it to a measured point source seen through the same atmosphere and use that information to correct the atmospheric distortion [14][12]. Lightwaves from distant point source(s) can be estimated as plane waves right before reaching earth's atmosphere [14].
The atmosphere will randomly distort this wave and the distortion can be measured by a wavefront sensor

[33]. Unfortunately, it is impossible to have a perfect natural point source everywhere in the night sky at visible wavelengths [11][37][38][39]. To overcome this limitation, one or more lasers can be used to create point sources, or guide beacons, in the area of interest [7][9][10][11][15][37]. A wavefront sensor takes the point source, or guide star, information, determines corrective adjustments, and then sends that data to an adaptive optics system [32]. Real-world performance of adaptive optics can vary from near diffraction-limited correction to no visible improvement, depending on a host of factors [1][8][16][17][28][36][40]. The need for a valid point source cannot be overstated, since it is foundational to adaptive optics and to the post-processing techniques described in this chapter. A great deal of research still remains in the field of generating and measuring quality point sources.

Although adaptive optics is an important technique in moving closer to diffraction-limited imaging, it is not currently a practical solution for all imaging sites. Having one or more image post-processing software solutions is a relatively affordable way to augment or supplement adaptive optic systems. It is the area of software post-processing on which this paper will focus from this point forward. The following sections discuss two of the most common post-processing methods for binary detection, deconvolution and speckle interferometry. Section 2.2 finishes the chapter with a brief and general look at how multi-hypothesis statistics can be used in binary detection.

Figure 2.1: Basic adaptive optic process. (Block diagram: lightwaves from a point source pass through random atmospheric turbulence to the telescope; a beam splitter feeds the CCD and the wavefront sensor, and the wavefront sensor's mirror-control signal drives the adaptive mirror.)

2.1 Post-Process Imaging

Image post-processing can be an effective and affordable way to increase image resolution. This section will look at two deconvolution methods and a speckle interferometry technique useful for visual binary detection. These processes attempt to reconstruct a higher resolution image from measured data. It is important to note that the multi-hypothesis method does not produce a reconstructed image, so it is a fundamentally different approach, but it still fits within the post-processing category of binary detection.

2.1.1 Atmospheric Turbulence.

Before discussing specific imaging techniques it is important to explain how atmospheric turbulence is modeled in this thesis. The most common methods utilize the approximation that atmospheric effects can be represented as wavefront errors in the pupil plane. If A(u, v) represents the two-dimensional, clear pupil and W(u, v) represents wavefront error as a function of position with respect to a fixed Cartesian coordinate system, (u, v), then the atmospherically blurred image, i(x, y), can be represented as:

i(x, y) = \left| \mathcal{F}\{ A(u, v)\, e^{jW(u,v)} \} \right|^2 \quad (2.1)

where \mathcal{F} denotes the Fourier Transform. The wavefront error as a function of position, commonly called a phase screen, can be simulated in various ways. One of the most common methods of phase screen generation utilizes Power Spectral Density (PSD) models, such as von Kármán and modified von Kármán [31]. These are referred to as Fourier Transform (FT) based methods [31]. The model used in the simulations of this paper is based on another method that uses Zernike polynomials to generate phase screens. It has been shown that the wavefront error, W(u, v), can be expanded with basis functions based on the geometry of the aperture [33]. Common basis functions include Zernike circular, Zernike annular, Gaussian-circular, and Gaussian-annular. A Zernike circular set of basis functions was used in this paper to match the geometry of the

simulated aperture. Expansion of the wavefront error, or phase screen, using Zernike circular polynomials is described below. Note the shift to polar coordinates, where ρ = \sqrt{u^2 + v^2} and θ = \arctan(u, v); also note that u and v represent grid locations in the pupil plane. The Zernike expansion equations are as follows:

W(u, v) = W_z(\rho, \theta) = \sum_i \alpha_i Z_i(\rho, \theta)

Z_i(\rho, \theta) = \begin{cases} \sqrt{2(n+1)}\, R_n^m(\rho)\, G_m(\theta) & \text{if } m \neq 0 \\ \sqrt{n+1}\, R_n^0(\rho) & \text{if } m = 0 \end{cases}

R_n^m(\rho) = \sum_{s=0}^{(n-m)/2} \frac{(-1)^s (n-s)!}{s!\,\left(\frac{n+m}{2} - s\right)!\,\left(\frac{n-m}{2} - s\right)!}\, \rho^{n-2s}

G_m(\theta) = \begin{cases} \sin(m\theta) & \text{if } i \text{ odd} \\ \cos(m\theta) & \text{if } i \text{ even} \end{cases} \quad (2.2)

where combinations of the index variables m and n produce a specific aberration effect, and i is a numerical index. Table 2.1 below shows the first 10 Zernike circular polynomials and the index mapping of m, n, and the numerical index i. The wavefront error, W(ρ, θ), can be measured or simulated. It is worth repeating that the wavefront error is expanded using Zernike polynomials:

W_z(\rho, \theta) = \sum_i \alpha_i Z_i(\rho, \theta). \quad (2.3)

To simulate phase screens we need to generate the coefficients, α_i, that weight each polynomial at a given polar coordinate (ρ, θ). To do this, the work of Roddier was utilized, who demonstrated that statistically accurate atmospheric phase screens can be generated by applying a Cholesky decomposition to the covariance matrix of the Zernike coefficients [30]. The following discussion provides a basic explanation of this method. First we note that α in Equation 2.3 will be an N × 1 vector, where N is the number of Zernike polynomials used to form the basis. Given the covariance matrix, C_{i,j}, for two Zernike polynomials and associated amplitudes, α_i and α_j:

Table 2.1: First 10 Zernike Circular Polynomials [33].

i    m   n   Z_i(ρ, θ)                    Name
1    0   0   1                            piston
2    1   1   2ρ cos(θ)                    x tilt
3    1   1   2ρ sin(θ)                    y tilt
4    0   2   √3 (2ρ² − 1)                 defocus
5    2   2   √6 ρ² sin(2θ)                y primary astigmatism
6    2   2   √6 ρ² cos(2θ)                x primary astigmatism
7    1   3   √8 (3ρ³ − 2ρ) sin(θ)         y primary coma
8    1   3   √8 (3ρ³ − 2ρ) cos(θ)         x primary coma
9    3   3   √8 ρ³ sin(3θ)                y primary trefoil
10   3   3   √8 ρ³ cos(3θ)                x primary trefoil
11   0   4   √5 (6ρ⁴ − 6ρ² + 1)           x primary spherical

C_{i,j} = E[α_i α_j]     (2.4)

C = L L^T     (2.5)

where L^T denotes the conjugate transpose of the lower triangular matrix L. The covariance matrix generated from two Zernike polynomials Z_i and Z_j has been derived by Noll [27][30]:

C_{i,j} = E[α_i α_j] = K_{Z_i Z_j} δ_Z Γ[(n_i + n_j − 5/3)/2] (D/r_0)^{5/3} / { Γ[(n_i − n_j + 17/3)/2] Γ[(n_j − n_i + 17/3)/2] Γ[(n_i + n_j + 23/3)/2] }     (2.6)

where:

K_{Z_i Z_j} = Γ(14/3) [(24/5) Γ(6/5)]^{5/6} / (2π²) × (−1)^{(n_i + n_j − 2m_i)/2} √[(n_i + 1)(n_j + 1)]     (2.7)

and:

δ_Z = [(m_i = m_j)] ∧ [parity(i, j) ∨ (m_i = 0)].     (2.8)
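As a numerical sanity check on Equation 2.2 and Table 2.1, the radial polynomial and Noll normalization can be evaluated directly. The following sketch (Python; the function names are illustrative, not from the thesis software) reproduces the defocus and x-tilt entries of Table 2.1:

```python
import math

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) from Equation 2.2."""
    total = 0.0
    for s in range((n - m) // 2 + 1):
        term = ((-1) ** s * math.factorial(n - s)
                / (math.factorial(s)
                   * math.factorial((n + m) // 2 - s)
                   * math.factorial((n - m) // 2 - s)))
        total += term * rho ** (n - 2 * s)
    return total

def zernike(i, n, m, rho, theta):
    """Z_i(rho, theta): Noll-normalized Zernike polynomial; G_m(theta) is
    sin(m*theta) for odd i and cos(m*theta) for even i."""
    if m == 0:
        return math.sqrt(n + 1) * zernike_radial(n, 0, rho)
    g = math.sin(m * theta) if i % 2 == 1 else math.cos(m * theta)
    return math.sqrt(2 * (n + 1)) * zernike_radial(n, m, rho) * g

# Defocus (i=4, n=2, m=0) at rho=1: sqrt(3)(2*1 - 1) = sqrt(3)
print(zernike(4, 2, 0, 1.0, 0.0))   # 1.7320508...
# x tilt (i=2, n=1, m=1) at rho=0.5, theta=0: 2*rho*cos(theta) = 1.0
print(zernike(2, 1, 1, 0.5, 0.0))   # 1.0
```

A full phase screen is then the weighted sum of such terms over a pupil-plane grid, with the weights α_i generated as described next.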

If we generate a vector, n, of zero-mean, unit-variance, uncorrelated numbers, then we can solve for the amplitudes, α, that weight each Zernike polynomial by applying the properties of the Cholesky decomposition such that:

L = C^{1/2}     (2.9)

and:

α = L n.     (2.10)

Thus, given a randomly generated zero-mean, unit-variance vector n, the Fried seeing parameter, r_0, the diameter of the aperture, D, and the number of Zernike polynomials desired, a wavefront error phase screen can be calculated. Again, this is the method used in all simulations conducted as part of this thesis. For more information on this method please refer to [30].

2.1.2 Deconvolution.

Deconvolution is a de-blurring technique widely used in many fields, including astrophotography. If an image is distorted with spatially invariant blur, i.e., the same atmospheric distortion is applied to the entire image, it can be modeled as the convolution of the measured point spread function and the true image [19]. Equivalently, by the Convolution Theorem, the Fourier transform of the image is equal to the Fourier transform of the object multiplied by the Optical Transfer Function (OTF):

i(x) = o(x) * h(x)
F{i(x)} = F{o(x)} F{h(x)}.     (2.11)

In this equation, * is the convolution operation. Typically, the only information known is the blurred image and an imperfect point spread function; this is known as blind deconvolution [19]. By using multiple frames with their respective measured point spread functions, the number of solutions to the blind deconvolution problem can be reduced [35]. This is known as multiframe blind deconvolution and is an important technique used for image

restoration [5][35].

In 1970, Antoine Labeyrie observed that the speckles in a short exposure image contain more spatial frequency information than a long exposure image [20]. The processes that use this speckle information to reconstruct an image are referred to as speckle imaging. There are two main steps to speckle imaging: first, estimate the object and reference star intensity; second, recover the phase that is lost in the first step [3]. Step one is described in this section and two methods of phase recovery are covered in the next two subsections.

Speckle interferometry is a technique used to find the expected value of the modulus of the Fourier transform of the object. If the source object happens to be two points, cosine fringe patterns can be seen (see Figure 2.2). Speckle interferometry is useful when the angular separation of a binary pair, α_s, satisfies λ/D ≤ α_s ≤ λ/r_0, where λ/D is approximately the smallest angular separation at which two points are detectable in a diffraction-limited system, λ/r_0 is approximately the smallest angular separation at which two points can be detected through atmospheric turbulence, r_0 is the Fried seeing parameter, and λ is the wavelength of light [31]. If α_s < λ/D, then the angular separation is too small to resolve. If α_s > λ/r_0, then speckle interferometry will not improve the resolution. It is therefore assumed going forward that the binary separation angle, α_s, falls within the range above, where deconvolution is helpful.

First consider the irradiance incident on a detector. If the imaging system is properly focused on the object, the incident irradiance equals the geometric object irradiance convolved with the PSF:

d(x) = Σ_y h(x − y) o(y)     (2.12)

where d(x) is a single measured short exposure image, h(y) is the PSF, and o(y) is the diffraction-limited object irradiance. If the source object is a binary, let o(y) =

o_1 δ(y) + o_2 δ(y − y_1), where δ is the Dirac delta function and o_1 and o_2 are the intensities of the binary points. Taking the convolution and applying the sifting property of the Dirac delta function yields:

d(x) = o_1 Σ_y h(x − y) δ(y) + o_2 Σ_y h(x − y) δ(y − y_1)
     = o_1 h(x) + o_2 h(x − y_1).     (2.13)

Now take the Fourier transform of d(x):

F{d(x)} = O_1 H(f) + O_2 H(f) e^{j2π y_1 f_x}     (2.14)

where, since the transform of a weighted delta function is a constant, O_1 = o_1 and O_2 = o_2. Taking the modulus squared of the result:

|F{d(x)}|^2 = ( O_1 H(f) + O_2 H(f) e^{j2π y_1 f_x} )( O_1 H^*(f) + O_2 H^*(f) e^{−j2π y_1 f_x} )
            = O_1^2 |H(f)|^2 + O_2^2 |H(f)|^2 + 2 Re{ O_1 O_2 |H(f)|^2 e^{j2π y_1 f_x} }     (2.15)

and noting that Re{ e^{j2π y_1 f_x} } = cos(2π y_1 f_x) reduces Equation 2.15 to:

|F{d(x)}|^2 = O_1^2 |H(f)|^2 + O_2^2 |H(f)|^2 + 2 O_1 O_2 |H(f)|^2 cos(2π y_1 f_x).     (2.16)

Dividing both sides by |H(f)|^2 yields:

|F{d(x)}|^2 / |H(f)|^2 = O_1^2 + O_2^2 + 2 O_1 O_2 cos(2π y_1 f_x).     (2.17)

Let Q(f) = |F{d(x)}|^2 − K, where K is the photon noise bias governed by Poisson statistics and Q(f) is the unbiased speckle interferometry estimator [13]. The signal-to-noise ratio, SNR_Q, of Q(f) improves as follows:

SNR_Q^N(f) = √N SNR_Q(f)     (2.18)

where SNR_Q^N is the signal-to-noise ratio of N averaged independent realizations of Q(f) [31]. Values of N, the number of short exposure images, range from a few hundred to

several thousand [31]. Assuming N > 1, it is necessary to take the expected value of both the numerator and denominator of Equation 2.17:

E{|F{i(x)}|^2} / E{|H(f)|^2} = O_1^2 + O_2^2 + 2 O_1 O_2 cos(2π y_1 f_x)     (2.19)

where i(x) is the detection plane irradiance of N images, H(f) is the OTF, y_1 is the separation of the binary stars, and O_1 and O_2 are the image spectrum amplitudes of the binary stars. Plotting the results of Equation 2.19 can reveal a cosine pattern if the angular separation of the binary pair is large enough [14][20][31]. The following is a simulated example of speckle interferometry using 200 independent images of two point sources.

[Figure 2.2 image: four panels (a)-(d)]

Figure 2.2: Speckle interferometry simulation with 200 independent frames. (a) The binary source image, (b) the simulated average of short exposure images, (c) speckle transfer function, (d) speckle transfer function showing fringe spacing.

In Figure 2.2 (a) the binary source image is shown; note the separation is 8 pixels. Panel (b) is the simulated average of 200 short exposure images; (c) is the calculated unbiased speckle

interferometry estimator, Q(f) (log scale); finally, (d) shows the log scale of Q(f)/E{|H(f)|^2}.

The binary separation in the original image, y_1, can be calculated from the results. Looking at Equation 2.19, the period of the fringe pattern depends on cos(2π y_1 f_x), where y_1 is a 2-tuple denoting the location of the second binary point in the image plane and Δf_x = 1/N is the size of a pixel in the frequency plane, with N = 256 pixels. The peak-to-peak period, in frequency-plane pixels, is:

P = 1/frequency = 1/(y_1 Δf_x) = N/y_1     (2.20)

so that, with the measured peak-to-peak period of the cosine fringes in Figure 2.2 (d), P = 32 pixels:

32 = 256/y_1  ⇒  y_1 = 256/32 = 8 pixels.     (2.21)

Looking at the simulation parameters, 8 pixels was indeed the binary separation used to generate the image. The physical interpretation of 8 pixels of separation depends on the characteristics of the imaging system and the distance to the object being viewed. Thus, by measuring the pixel separation of the binary fringe pattern, an estimate can be made of the actual binary separation in the object plane. Observing these cosine fringe patterns is a proven method of finding binary point sources. However, as mentioned before, the phase data is lost after taking the second moment of the image spectrum. This phase data needs to be recovered for image reconstruction. The next two subsections discuss two common methods of phase retrieval.
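The period-to-separation relation in Equations 2.20 and 2.21 is easy to verify numerically. The following one-dimensional, noise-free sketch (Python; not the thesis's simulation code) transforms a two-point object and measures the fringe period:

```python
import numpy as np

N, y1 = 256, 8                      # grid size and binary separation (pixels)
obj = np.zeros(N)
obj[N // 2] = 1.0                   # first point source
obj[N // 2 + y1] = 0.8              # second, dimmer point source

# |F{o}|^2 = O1^2 + O2^2 + 2*O1*O2*cos(2*pi*y1*fx): cosine fringes
spectrum = np.abs(np.fft.fft(obj)) ** 2

# locate local maxima of the fringe pattern (circular neighbors)
peaks = np.where((spectrum > np.roll(spectrum, 1)) &
                 (spectrum > np.roll(spectrum, -1)))[0]
period = float(np.diff(peaks).mean())
print(period, N / period)           # period 32 pixels -> separation 256/32 = 8
```

With atmospheric blur and photon noise the fringes ride on the speckle transfer function, which is why Q(f) must be averaged over many frames and divided by E{|H(f)|^2} before the period is measured.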

2.1.2.1 Knox-Thompson.

As stated before, to properly reconstruct an image using speckle imaging, the phase of the source object needs to be recovered. The first commonly used technique is the Knox-Thompson, or cross spectrum, method. K. T. Knox and B. J. Thompson published a paper in the Astrophysical Journal in 1974 describing a method of recovering images from atmospherically-degraded short exposure images [18][31]. In their paper, they defined the cross spectrum, C(f, Δf), as:

C(f, Δf) = I(f) I^*(f + Δf)     (2.22)

where I(f) = O(f)H(f), O(f) is the object spectrum, and H(f) is the OTF [31][18]. The cross spectrum of the detected image is not directly proportional to the cross spectrum of the object: a bias term must be accounted for to properly estimate the phase [2][4][31]. If we assume individual pixels in the detection plane are statistically independent and photon arrival is governed by Poisson statistics, then the unbiased cross spectrum for a single measured image, d(x), can be written as [31]:

C_u(f, Δf) = D(f) D^*(f + Δf) − D^*(Δf).     (2.23)

The term D^*(Δf) is the conjugate of the image spectrum at Δf, defined by:

D^*(Δf) = Σ_x d(x) e^{j2π Δf·x}     (2.24)

which differs from image to image and needs to be subtracted out before averaging the short exposure images. Each image also needs to be centered, as the cross spectrum method is not shift invariant [2][4][18][31]. Typical values of the spatial frequency offset, Δf = (Δf_1, Δf_2), satisfy |Δf| < r_0/(λd), where r_0 is the Fried seeing parameter, λ is the wavelength of the light, and d is the distance

from the pupil plane to the imaging plane [2][31]. Taking the average cross spectrum over multiple short exposure images yields the following equation [2][31]:

E[C(f, Δf)] = |O(f)| |O(f + Δf)| e^{j[φ_o(f) − φ_o(f + Δf)]} E[H(f) H^*(f + Δf)]     (2.25)

where the second moment of the OTF, E[H(f)H^*(f + Δf)], is the cross spectrum transfer function and relates the object spectrum, O(f), to the cross spectrum. The cross spectrum transfer function is real-valued, so the phase of the average cross spectrum is [2][4][31]:

φ_C(f, Δf) = φ_o(f) − φ_o(f + Δf).     (2.26)

The object phase, φ_o, can be extracted from this equation. Let the offset vector in the x direction be Δf_x and the offset vector in the y direction be Δf_y, that is, Δf = (Δf_x, Δf_y). The phase differences generated by these offset vectors are [2][31]:

Δφ_x(f_x, f_y) = φ_o(f_x, f_y) − φ_o(f_x + Δf_x, f_y)     (2.27)
             ≈ −[∂φ_o(f)/∂f_x] Δf_x     (2.28)

Δφ_y(f_x, f_y) = φ_o(f_x, f_y) − φ_o(f_x, f_y + Δf_y)     (2.29)
             ≈ −[∂φ_o(f)/∂f_y] Δf_y.     (2.30)

The partial derivatives form the orthogonal components of the gradient of the object phase spectrum, φ_o(f). This phase data can be combined with the magnitude data retrieved from speckle interferometry to reconstruct the image [2][31]. Calculating φ_o(f) can be accomplished by the following equation:

φ_o(N_x Δf_x, N_y Δf_y) = Σ_{i=0}^{N_x−1} Δφ_x(i Δf_x, 0) + Σ_{j=0}^{N_y−1} Δφ_y(0, j Δf_y)     (2.31)

where N_x and N_y are the number of pixels in the x and y directions in the image plane. For the simulations in this paper, Δf_x = Δf_y = 1, which provides a small offset constant without doing sub-pixel manipulations. Looking at Equation 2.31, each point in the reconstructed

object phase, φ_o, can be obtained by taking the angle of the average cross spectrum in both the x and y directions and then summing along the x and y axes to the desired phase coordinate [31]. Many summing paths can be taken to reach a particular phase coordinate. In a noise-free environment all paths to a particular point yield the same result. In a real-world system, each path yields slightly different results depending on random noise effects. It is therefore standard practice to calculate the object phase at a particular point by averaging the results of summing several paths to that point [2][31].

The result of implementing Equation 2.31 is an unwrapped 2D phase matrix containing the reconstructed object phase in the Fourier domain. To get the reconstructed image, φ_o needs to be wrapped and then combined with the intensity information obtained through, in this case, speckle interferometry. The following is a simulated example of a basic implementation of the Knox-Thompson, or cross spectrum, method. The reconstructed image is formed from the following equation:

o(x, y) = F^{−1}{ |O| e^{jφ_o} }     (2.32)

where |O| is the modulus of the average object intensities calculated using speckle interferometry, φ_o is the object phase recovered by the cross spectrum method, and o(x, y) is the reconstructed object. Figure 2.3 shows the sum of 50 short exposure images of a binary point source with a separation of 4 pixels on a 255-pixel square grid. Each image passed through a randomly generated phase screen with an r_0 value of 30 cm to simulate atmospheric blur. Poisson noise was added to the data calculated at the image plane. A focused telescope with a square aperture of 1 meter was used for this simulation. Figure 2.4 is the result of my implementation of the cross spectrum method described in this section when applied to the image data from Figure 2.3.
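The path-summing step of Equation 2.31 can be made concrete with a minimal single-path sketch (Python; illustrative only — a robust implementation averages many paths and handles noise). Note it uses forward phase differences, the negative of the Δφ convention in Equation 2.27, so that cumulative sums rebuild φ_o directly:

```python
import numpy as np

def accumulate_phase(dphi_x, dphi_y):
    """One-path version of Equation 2.31: sum x phase differences along the
    fy = 0 row, then sum y phase differences up each column."""
    Nx, Ny = dphi_x.shape
    phi = np.zeros((Nx, Ny))
    # walk along the x axis first (fy = 0 row) ...
    phi[1:, 0] = np.cumsum(dphi_x[:-1, 0])
    # ... then up each column in y
    phi[:, 1:] = phi[:, :1] + np.cumsum(dphi_y[:, :-1], axis=1)
    return phi

# Synthetic check: a known linear phase ramp phi(fx, fy) = 0.1*fx + 0.2*fy
fx, fy = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
phi_true = 0.1 * fx + 0.2 * fy
dphi_x = np.roll(phi_true, -1, axis=0) - phi_true   # forward differences
dphi_y = np.roll(phi_true, -1, axis=1) - phi_true   # (wrapped edge unused)
print(np.allclose(accumulate_phase(dphi_x, dphi_y)[:7, :7], phi_true[:7, :7]))
```

In practice the differences come from the angle of the averaged cross spectrum, and several such paths are averaged at each frequency coordinate before wrapping.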

Figure 2.3: Stack of 50 short exposure images, each simulated through a unique random phase screen with r_0 = 30 cm.

Implementing a robust cross spectrum phase retrieval algorithm requires extensive fine tuning to remove as much noise as possible. Please refer to Ayers' work for more information on implementation [2]. Reconstructing a higher resolution image is typically what is desired in astronomical imaging. A major difference between the multi-hypothesis method and phase reconstruction is the focus on detecting binaries versus producing higher resolution images. One other phase reconstruction method should be mentioned: the bispectrum technique.

2.1.2.2 Bispectrum.

The bispectrum is another effective method used to reconstruct the phase of an image. It is invariant to image shift, which is a valuable property when looking at multiple images of potential binaries [2][22]. It is defined as [31]:

B(f_1, f_2) = D(f_1) D(f_2) D^*(f_1 + f_2).     (2.33)

Note the phase of the object spectrum is contained in the phase of the bispectrum at three points in frequency space (f_1, f_2, and f_1 + f_2), compared to the cross spectrum, which needs two points to reconstruct the object phase. Many different techniques have been and

Figure 2.4: Result of cross spectrum method with r_0 = 30 cm: (a) original image, (b) cross section of original image, (c) reconstructed image, (d) cross section of reconstructed image.

continue to be published on how best to calculate the phase using the bispectrum [2][21][24][25][26]. I will highlight one such method: the unit amplitude phasor recursive reconstructor. To begin, the unbiased bispectrum for a single short exposure image is:

B_u(f_1, f_2) = D(f_1) D(f_2) D^*(f_1 + f_2) − |D(f_1)|^2 − |D(f_2)|^2 − |D(f_1 + f_2)|^2 + 2K + 3Pσ_n^2     (2.34)

where K is the bias caused by the random arrival of photons governed by Poisson statistics and Pσ_n^2 represents additive noise caused by the imaging device [31]. The unbiased bispectrum is calculated for each short exposure image and then the average bispectrum is computed. The phase of the resulting mean bispectrum,

φ_B(f_1, f_2),     (2.35)

is equal to [2][22][31]:

φ_B(f_1, f_2) = φ_O(f_1) + φ_O(f_2) − φ_O(f_1 + f_2).     (2.36)

Given we have calculated the bispectrum phase, φ_B(f_1, f_2), we need to know a few seed values of the object phase spectrum in order to iteratively calculate all remaining values of the object phase. A typical approach is to set:

φ_O(0, 0) = φ_O(1, 0) = φ_O(−1, 0) = φ_O(0, 1) = φ_O(0, −1) = 0.     (2.37)

Rearranging Equation 2.36 as φ_O(f_1 + f_2) = φ_O(f_1) + φ_O(f_2) − φ_B(f_1, f_2), the recursion starts from the seed values:

φ_O((0, 0) + (0, 0)) = −φ_B((0, 0), (0, 0))
φ_O((1, 0) + (0, 0)) = −φ_B((1, 0), (0, 0))
φ_O((−1, 0) + (0, 0)) = −φ_B((−1, 0), (0, 0))     (2.38)
φ_O((0, 0) + (0, 1)) = −φ_B((0, 0), (0, 1))
φ_O((0, 0) + (0, −1)) = −φ_B((0, 0), (0, −1)).

Much like the cross spectrum, many different combinations of known values can be used to find a value at an unknown location. For example, if the object phase spectrum at the point φ_O(4, 5) is desired, then:

φ_O(4, 5) = φ_O(1, 0) + φ_O(3, 5) − φ_B((1, 0), (3, 5))
          = φ_O(2, 3) + φ_O(2, 2) − φ_B((2, 3), (2, 2))     (2.39)
          = φ_O(3, 4) + φ_O(1, 1) − φ_B((3, 4), (1, 1))

and so forth. Each linear combination does not necessarily give the same result if noise is present. Thus, like the cross spectrum method, many paths are typically calculated and then averaged [21][31]. Lastly, due to a potential 2π ambiguity when taking different paths, the calculations are typically done with unit phasors; thus, the final algorithm for reconstructing phase using the bispectrum is [21]:

e^{jφ_O(f_1 + f_2)} = e^{jφ_O(f_1)} e^{jφ_O(f_2)} e^{−jφ_B(f_1, f_2)}.     (2.40)

No example of implementing the bispectrum method is provided in this paper; the reader can refer to [2][21][24][25][26] for examples and more information. Both the cross spectrum and bispectrum methods have proven effective at reconstructing atmospherically blurred images to reveal binary pairs. However, as discussed before, if the objective is only binary detection, then complete image reconstruction is unnecessary. By comparing the statistics of two hypothetical sources, i.e., a single and a binary, better results for binary detection can be obtained than by visually inspecting reconstructed images. The next section discusses another method of image reconstruction useful in binary detection: imaging by correlation.

2.1.3 Imaging by Correlation.

This method of image recovery applies a process developed to recover meaningful information from random data to the problem of image recovery from second and third order correlations. The technique is unique in that it simultaneously recovers the Fourier magnitude and phase, whereas in speckle imaging amplitude and phase are recovered separately [34]. In general, correlation is an N-th order process; for the purposes of binary detection N = 2, the autocorrelation, will be used.
The general strategy is to take the autocorrelation of the measured image data and the autocorrelation of an estimated image and then iterate through a log likelihood cost function to reduce the estimated image

to the most likely true image [34]. Let R(y) be the autocorrelation function of the measured image data, d(x), where R(y) = Σ_x d(x) d(x + y), the summation running over all pixels in d. Let R_λ(y) be the autocorrelation of the estimated image, λ(x). Any cost function can be used; however, in this work the I-divergence function D(R, R_λ) used by Schulz and Snyder is adopted [34]:

D(R, R_λ) = Σ_y { [R_λ(y) − R(y)] + R(y) ln[ R(y) / R_λ(y) ] }.     (2.41)

By minimizing the I-divergence cost function we can solve for the λ(x) that is most likely the true image. Taking the derivative of D(R, R_λ) with respect to a single point in the estimated image and setting it equal to zero yields the necessary optimality condition:

∂D(R, R_λ)/∂λ(x_o) = Σ_y [λ(x_o + y) + λ(x_o − y)] − Σ_y [R(y)/R_λ(y)] [λ(x_o + y) + λ(x_o − y)] = 0.     (2.42)

Schulz and Snyder then set up an algorithm that iteratively solves for an updated λ(x) based on the previous one [34]. For iteration k:

λ_{k+1}(x) = λ_k(x) (1/R_o^{1/2}) Σ_y { [R(y) + R(−y)] / [2 R_{λ_k}(y)] } λ_k(x + y)     (2.43)

where R_o is the autocorrelation of the measured data evaluated at y = 0. Using the convolution and correlation theorems, the sums can be computed as multiplications in the Fourier domain. After k iterations an estimated reconstructed image, λ_k(x), is found [34]. Like the bispectrum, the autocorrelation does not require image tilt to be removed before processing. Figure 2.5 shows a simulated result after 100 iterations, and Figure 2.6 compares the result to the original image. The same simulation parameters used in the cross spectrum example in Section 2.1.2.1 were used here. The software implementation of this method is discussed further in Chapter 3, as it is used as a comparison to the multi-hypothesis technique proposed in this paper.
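A one-dimensional, circular-boundary sketch of the update in Equation 2.43 is shown below (Python; illustrative only, not the thesis software). One assumption of this sketch: it normalizes by (Σ_y R(y))^{1/2} — the total image flux — rather than R(0)^{1/2}, so that an image whose autocorrelation already matches R is an exact fixed point of the step:

```python
import numpy as np

def autocorr(a):
    """Circular autocorrelation R(y) = sum_x a(x) a(x + y), via the FFT."""
    return np.real(np.fft.ifft(np.abs(np.fft.fft(a)) ** 2))

def schulz_snyder_step(lam, R):
    """One multiplicative update in the spirit of Equation 2.43.
    Normalizer: sqrt(sum_y R(y)) = total image flux (an assumption of this
    sketch), so a lam whose autocorrelation matches R is left unchanged."""
    R_lam = autocorr(lam)
    w = (R + np.roll(R[::-1], 1)) / (2.0 * R_lam)   # [R(y)+R(-y)] / 2R_lam(y)
    # corr(x) = sum_y w(y) lam(x + y), computed with FFTs
    corr = np.real(np.fft.ifft(np.fft.fft(lam) * np.conj(np.fft.fft(w))))
    return lam * corr / np.sqrt(R.sum())

# Fixed-point check: a strictly positive two-peak "binary" image
x = np.arange(64)
truth = (0.1 + np.exp(-0.5 * ((x - 25) / 2.0) ** 2)
             + 0.7 * np.exp(-0.5 * ((x - 35) / 2.0) ** 2))
R = autocorr(truth)
print(np.allclose(schulz_snyder_step(truth, R), truth))   # True
```

In practice the update is applied for many iterations from a positive initial guess, with R averaged over many short exposure frames; convergence behavior is analyzed in [34].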

Figure 2.5: Results with different numbers of iterations: (a) 1, (b) 5, (c) 10, (d) 15, (e) 80, and (f) 100 iterations.

Figure 2.6: Results of the correlation technique: (a) original image, (b) original image cross section, (c) reconstructed image, and (d) reconstructed image cross section.

2.1.4 Binary detection by post-process image reconstruction summary.

Image reconstruction via the cross spectrum, bispectrum, and autocorrelation are all proven techniques that can enhance image spatial resolution and thus greatly aid binary detection. This section provided a basic understanding of how these methods can enhance images. This thesis will compare one of the methods listed above, image reconstruction by autocorrelation, to a statistical approach that does not provide a reconstructed image but instead focuses on the question: Was this image created by a single point source or a binary? The next section provides an overview of how a multi-hypothesis technique can be set up to answer this question.

2.2 Multi-Hypothesis Detection

By hypothesizing that a source signal is either a zero, a one, et cetera, and calculating the expected value of each hypothesis, the most likely original signal can be determined. This simple yet powerful logic is foundational to digital communication and other areas of electro-optics such as Light Detection And Ranging (LiDAR) [29]. The same logic can be used in detecting binaries that have been distorted by the atmosphere. If we can measure how the atmosphere distorts a point source and simulate how that same atmosphere would distort various combinations of binary sources, we can predict what an image should look like if it was the result of a binary or a single point source, as perceived from the pupil plane of an imaging system. We can then compare what the image should look like under each hypothesis to what was actually imaged through that same atmosphere. In this way a binary can be detected based on what we expect to see given two different scenarios, or hypotheses. Figure 2.7 shows the basic concept behind this idea.
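The thesis's multi-hypothesis algorithm is derived in Chapter 3. Purely to illustrate the comparison logic described above, the sketch below fits a one-point and a two-point model to noisy data under assumed Poisson statistics and a known PSF, then compares the maximized log-likelihoods. All names, grids, and parameters here are hypothetical, and Hypothesis Zero (the below-threshold case) is omitted:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def poisson_loglike(d, model):
    """Poisson log-likelihood of counts d under mean `model` (ln d! dropped)."""
    m = np.clip(model, 1e-9, None)
    return float(np.sum(d * np.log(m) - m))

N = 32
x = np.arange(N)
psf = np.exp(-0.5 * ((x - N // 2) / 1.5) ** 2)   # stand-in "measured" PSF
psf /= psf.sum()

def shift(p):                                     # PSF centered at pixel p
    return np.roll(psf, p - N // 2)

# Simulated measurement: a binary (positions 14 and 18) plus photon noise
d = rng.poisson(400 * shift(14) + 250 * shift(18))

# Hypothesis One: best single point source (grid search, coarse intensities)
ll1 = max(poisson_loglike(d, a * shift(p))
          for p in range(N) for a in (150, 250, 400, 650, 900))

# Hypothesis Two: best pair of point sources
ll2 = max(poisson_loglike(d, a1 * shift(p1) + a2 * shift(p2))
          for p1, p2 in product(range(N), repeat=2) if p1 < p2
          for a1 in (150, 250, 400, 650) for a2 in (150, 250, 400, 650))

print(ll2 > ll1)   # the two-point hypothesis fits the data better
```

A real detector would replace the grid search with the maximum-likelihood estimates derived in Chapter 3 and apply a decision criterion to the likelihood difference rather than a bare comparison.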
One of the most important assumptions made for using this technique is that a valid point spread function (PSF) can be measured that correlates to the image being analyzed for a potential binary. Another

important assumption, and an area for future research, is that more than two scenarios can be hypothesized in order to detect larger clusters than just a binary. Hypotheses for three, four, and so forth points, as well as different shapes, can all be used as comparisons for the image that was actually detected. The derivation and detailed analysis of a proposed multi-hypothesis algorithm to detect binaries is given in Chapter 3. Results of simulated and measured data testing are given in Chapter 4.

[Figure 2.7 diagram: Multi-Hypothesis for Binary Detection Overview — (1) Measure Data: measured image and measured PSF; (2) Calculate Log-likelihoods: Hypothesis Zero, determine if the image intensity is below the minimum threshold for detection; Hypothesis One, calculate the most likely intensity and position of a single point source to produce the measured image; Hypothesis Two, calculate the most likely intensity and position of two point sources to produce the measured image; (3) Binary Decision: based on detection criteria, determine which hypothesis is most likely.]

Figure 2.7: Overview of binary hypothesis method. Chapter 3 will work through derivations of each step.

2.3 Chapter Summary

As binary objects become closer together and dimmer, detection becomes impossible for a standard imaging system. Using adaptive optics, looking for fringe patterns, extracting higher resolution from speckle images, and estimating the image using correlation are current methods that aid binary detection. This thesis explores a method that has not previously been used in astronomy for binary detection, referred to herein as the multi-hypothesis technique. By measuring the PSF we can calculate what a single point source and a binary source would look like, then compare that to what is actually detected. The result of this comparison can be used to make a statistical determination of whether the detected image was the result of a single point source or a binary point source.