On the FPA infrared camera transfer function calculation

by S. Datcu (1), L. Ibos (1), Y. Candau (1), S. Matteï (2)

(1) CERTES, Université Paris XII Val de Marne, Créteil, France
(2) LTM, Université de Bourgogne, Le Creusot, France

Abstract

In order to obtain a good quantification of the heat flux emitted by a surface using infrared images, it is necessary to calculate accurate values of the surface temperature. Infrared images are often distorted by aberration and diffraction phenomena and by electronic noise. A convolution product can model the aberration and diffraction distortions, while the detector noise is assumed to be stationary and additive. Image restoration can then be treated as an ill-posed problem, whose solution is commonly obtained using regularisation methods.

1. Introduction

The physical approach used in this work is to study the correspondence between all surface elements of an object M observed by an infrared camera and their images in the image plane (i.e. the detector matrix). The luminance of each surface element M_0 of coordinates (x_o, y_o) can be represented in the object plane by a two-dimensional Dirac distribution δ(x_o - x, y_o - y). The image of the element M_0 through an optical system is a blurred image located around the corresponding element M'_0 of coordinates (x'_0, y'_0) in the image plane. The irradiance distribution of this image can be represented by the impulse response h, also called the point spread function (PSF). This function is often considered position invariant within a limited area of the image plane [1,2].
Considering an object M with a luminance distribution L_o(x, y), the irradiance distribution E(x', y') in the image plane is given by the convolution product between L_o and the impulse response h [3]:

E(x', y') = L_o(x, y) * h(x, y)    (1)

The spectrum H(ξ, η) = II(h(x', y')) of the impulse response h(x', y') is the transfer function of the infrared viewing system; II is the two-dimensional Fourier transform operator and ξ, η are the spatial frequencies in the Fourier space. An infrared system consists of several subsystems, each with its own impulse response, and the overall effect of all the subsystems operating on the input object distribution is the multiplication of their respective transfer functions. We assume that H(ξ, η) has been normalized to have unit value at zero spatial frequency. This normalization yields a relative transmittance for the various frequencies and ignores attenuation factors that are independent of spatial frequency. With this normalization, H(ξ, η) is referred to as the optical transfer function (OTF), which is generally a complex function having both a magnitude portion (the modulation transfer function, MTF) and a phase portion (the phase transfer function, PTF):

OTF(ξ, η) = H(ξ, η) = |H(ξ, η)| e^{j θ(ξ, η)} = MTF(ξ, η) e^{j PTF(ξ, η)}    (2)
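As a numerical illustration (not part of the original study), the multiplication of subsystem transfer functions and the MTF/PTF decomposition of equation 2 can be sketched in Python; the two component OTFs below are hypothetical stand-ins, not the camera's data:

```python
import numpy as np

# Hypothetical one-dimensional spatial-frequency axis; the component OTFs
# below are illustrative stand-ins, not measured camera responses.
xi = np.linspace(0.0, 1.0, 256)

# Two example subsystem OTFs: a diffraction-like roll-off and a sinc-shaped
# detector term. The system OTF is the product of the component OTFs.
otf_diffraction = np.clip(1.0 - xi, 0.0, None)
otf_detector = np.sinc(0.8 * xi)          # numpy sinc is sin(pi x)/(pi x)
otf_system = otf_diffraction * otf_detector

# MTF and PTF are the magnitude and phase of the (generally complex) OTF.
# For these real, non-negative example OTFs the PTF is identically zero.
mtf = np.abs(otf_system)
ptf = np.angle(otf_system)

# Normalization convention of the text: unit value at zero frequency.
assert np.isclose(mtf[0], 1.0)
```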

The aim of this work is to determine the overall transfer function of an infrared camera system. In the case of an infrared system with focal plane array (FPA) detectors, it consists of four components [2]: the diffraction transfer function OTF_diffraction, the geometrical transfer function OTF_geometrical, the detector footprint transfer function OTF_footprint and the sampling transfer function OTF_sampling.

6.1 Diffraction OTF

Because of the wave nature of light, an optical system with a finite-sized aperture can never form a point image. The characteristic minimum blur diameter, when other defects (aberrations) are absent, is [2]:

d_diffraction = 2.44 λ F / D    (3)

where F is the distance from the lens to the image plane and D is the aperture size of the lens. Conceptually, the diffraction OTF can be calculated as the magnitude of the Fourier transform of the diffraction impulse response profile. The diffraction OTF can also be calculated as the normalized autocorrelation of the exit pupil of the system [1,2]. For incoherent systems, the impulse response h(x, y) is the squared magnitude of the two-dimensional Fourier transform of the diffracting aperture p(x, y):

h_diffraction(x, y) = |II(p(x, y))|^2    (4)

with:

p(x, y) = rect(x/D, y/D)    (5)

Then, the diffraction OTF is given by:

OTF_diffraction(ξ, η) = II(h_diffraction(x, y))    (6)

6.2 Geometrical OTF

Consider first the one-dimensional case. A point source of light of wavelength λ is located at infinity along the optical axis, from which spherical waves emanate. At an infinite distance from the lens they appear as plane waves, as illustrated at the far left of figure 1.

Fig. 1. Image formation physics
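The two computation routes given above for the diffraction OTF (Fourier transform of the incoherent impulse response, and normalized autocorrelation of the exit pupil) can be checked against each other numerically. This Python sketch uses an illustrative square pupil on an arbitrary grid, not the camera's actual aperture:

```python
import numpy as np

# Illustrative square pupil p(x, y) = rect(x/D, y/D) on an N x N grid.
# N and D are arbitrary demo values, not the camera's parameters.
N, D = 256, 32
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
pupil = ((np.abs(X) <= D / 2) & (np.abs(Y) <= D / 2)).astype(float)

# Incoherent impulse response: squared magnitude of the pupil's Fourier
# transform (eq. 4); the OTF is the Fourier transform of that PSF (eq. 6).
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))) ** 2
otf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
otf /= otf.max()                   # normalize to unity at zero frequency

# Equivalent route: the diffraction OTF is the normalized autocorrelation
# of the exit pupil (Wiener-Khinchin), a triangle for a square pupil.
autocorr = np.abs(np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2)))
autocorr /= autocorr.max()
assert np.allclose(otf, autocorr, atol=1e-6)
```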

At each coordinate β of the pupil, there is a phase distance error Φ(β) from a reference sphere of radius F (i.e., one which perfectly focuses light in the image plane). The lens aperture size β_0 is assumed to be small enough to allow Huygens' scalar theory of light to be used. Huygens' principle states that each point β of the wavefront just to the right of the lens acts like a new source of spherical waves. This defines a complex "pupil function" e^{j k Φ(β)} at each point β of the pupil [1] (k = 2π/λ is the wave number). The focused image has an intensity profile h(x) (the point spread function) located in the image plane. This may be represented in terms of a point amplitude function a(x) as:

h(x) = |a(x)|^2    (7)

The quantity a(x) is related to the pupil function by Fraunhofer's integral:

a(x) = ∫_{-β_0}^{β_0} e^{j k (Φ(β) - β x)} dβ    (8)

which is basically a finite Fourier transform of the pupil function. Finally, the optical transfer function OTF(ξ) is the Fourier transform of h(x). Extending this result to the two-dimensional case, we obtain:

a(x, y) = II(e^{j k Φ(β_1, β_2)}),   h(x, y) = |a(x, y)|^2,   OTF_geometrical(ξ, η) = II(h(x, y))    (9)

The optical pupil is generally aberrated, with a pupil function U(β_1, β_2) = e^{j k Φ(β_1, β_2)}, where Φ(β_1, β_2) is the aberration function [1]. We suppose that this function has a third-degree polynomial form in the pupil coordinates (β_1, β_2):

Φ(β_1, β_2) = a_1 ρ^3 + a_2 ρ^2 + a_3 ρ,   ρ^2 = β_1^2 + β_2^2    (10)

The first assumption implies that the irradiance distribution over the aberration blur area is approximated by the most likely diffraction irradiance distribution, and the second one allows the evolution of the equivalent diffraction blur over the image plane to be predicted. The geometrical impulse response then has a parametric form:

h = h(x, y, Φ(β_1, β_2)) = h(x, y, a_i)    (11)

6.3 Detector footprint OTF

The footprint of a particular detector is its projection onto the object space.
If we consider an FPA imaging system, the flux falling onto an individual detector produces a single output which is a spatial average of the image irradiance over the corresponding footprint surface. The integration of the scene irradiance E(x, y) over the detector surface (considered as a square of side w) is equivalent to a convolution of L_o(x, y) with the rect function that describes the detector responsivity:

E(x', y') = ∫_{-w/2}^{w/2} ∫_{-w/2}^{w/2} L_o(x' - x, y' - y) dx dy = L_o(x', y') * rect(x'/w, y'/w)    (12)

By Fourier transformation of the impulse response h(x', y') = rect(x'/w, y'/w) we obtain the OTF, which is a two-dimensional sinc function:

OTF_footprint(ξ, η) = sinc(ξ w, η w)    (13)

6.4 Sampling OTF

A sampled-imaging system is not shift invariant [2]. The position of the image-irradiance function with respect to the sampling sites will affect the final image data. If we measure the OTF of such a sampled system, the Fourier transform of the spatial-domain image will depend on the alignment of the target and the sampling sites. This shift variance violates one of the main assumptions required for a convolutional analysis of the image-forming process. A spatially averaged impulse response and the corresponding OTF component must therefore be defined by assuming that the scene being imaged is randomly positioned with respect to the sampling sites. For a two-dimensional sampling grid, the sampling impulse response is a rectangle function of widths equal to the sampling intervals Δx, Δy in each direction:

h_sampling(x, y) = rect(x/Δx, y/Δy)    (14)

and the sampling OTF is defined by the Fourier transform of h_sampling(x, y):

OTF_sampling(ξ, η) = II(rect(x/Δx, y/Δy)) = sinc(ξ Δx, η Δy)    (15)

2. Experimental set-up

Some of the major technical specifications of the camera used in this study (model AGEMA 570 Elite from FLIR Systems) are listed in table 1. The experimental set-up is schematically presented in figure 2.

Fig. 2. Experimental set-up for camera transfer function measurement (Peltier-element heat source with K-type thermocouple; plate with black coating, ε = 0.97, and aluminum-based coating, ε = 0.63; camera lens; micro-metric shift turntable)

The target scene is made of a 45 x 45 x 5 mm duralumin plate. In order to create an infrared knife-edge source, the viewed face of the plate was split into two equal vertical surfaces. One surface was coated with a black paint of emissivity 0.97, while the other was coated with an aluminum paint of emissivity 0.63. The coatings used are very diffusive, which prevents specular reflections of external infrared sources. The duralumin plate is heated by Peltier elements, which allows a stable plate temperature to be maintained during the measurements.
The temperature of the plate is controlled by a K-type thermocouple. A fuzzy logic controller programmed under LabView regulates the heat flux supplied by the Peltier elements. The measurement of the transfer function of the camera was made for a fixed distance of 54.5 cm between the camera lens and the target

scene. The acquisition of thermograms was made at a rate of about 7 images/second. The object parameters imposed to the camera during acquisition are listed in table 2. The camera was placed on a turntable which allowed micro-metric step translations over a total length of 2.5 cm. The step accuracy is 10 µm.

Table 1. Technical specifications of the Agema 570 camera

Field of view: 45° x 34°
IFOV: 2.45 mrad
Spectral range: 7.5 - 13 µm
Accuracy: ±2 % of range or ±2 °C
Image size: 320 x 240 pixels

Table 2. Object parameters used for infrared images acquisition

Relative humidity: 50 %
Distance camera-object: 54.5 cm
Ambient temperature: 20 °C
Atmospheric temperature: 20 °C
Emissivity ε: 1
Transmission τ: 1

3. Distortion model and identification procedure

Generally, we can write:

g(x', y') = ∫∫ f(x, y) h(x' - x, y' - y, a_i) dx dy + n(x', y')    (16)

where g(x', y') is the irradiance distribution in the image plane, f(x, y) is the luminance distribution in the object plane, h(x' - x, y' - y, a_i) is the impulse response function of the imaging system and n(x', y') is the noise (assumed ergodic, stationary and Gaussian). The use of a lens with a large field of view implies a non-linear behavior of the camera optical system (i.e. the presence of lens aberrations). This behavior compelled us to define a multi-valued impulse response function in order to take this non-linearity into account. We can make a linear approximation of this problem by splitting the image into several areas in which the camera response can be approximated by a convolution product. For each of these areas, a local impulse response can be defined (equation 11); its values are given by the overall parametric PSF h(x' - x, y' - y, a_i), where the parameters a_i allow the local value of the aberration function Φ(x', y') to be computed (equation 10). Our approach to determine the camera transfer function is to use an infrared knife-edge source and to acquire the camera edge spread function (ESF) [2].
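The knife-edge forward model behind this approach can be illustrated numerically: the response to an ideal step edge (the ESF) is the running integral of a line spread profile. In the Python sketch below, the Gaussian profile and its width are assumptions for illustration only, not the camera's measured blur:

```python
import numpy as np

# Synthetic line spread profile; the Gaussian shape and sigma are
# hypothetical stand-ins for the camera's (unknown) blur.
x = np.linspace(-3.0, 3.0, 601)       # mm, demo sampling grid
dx = x[1] - x[0]
sigma = 0.3                           # mm, assumed blur width
lsf = np.exp(-x**2 / (2 * sigma**2))
lsf /= lsf.sum() * dx                 # unit-area line spread profile

# ESF = integral of the LSF, i.e. the line profile convolved with a
# unit step (an ideal infrared knife edge).
esf = np.cumsum(lsf) * dx

# Far from the edge the ESF tends to 0 and 1; at the edge it is ~0.5.
assert esf[0] < 0.01 and esf[-1] > 0.99
assert abs(esf[len(x) // 2] - 0.5) < 0.02
```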
The ESF is the convolution of the PSF with the unit-step function:

ESF(x) = PSF(x, y) * step(x)    (17)

The camera response is noisy, so equation 17 cannot be solved for the PSF by direct deconvolution, because deconvolution is an ill-posed problem [3]. In order to improve the signal-to-noise ratio, we averaged N infrared images. Taking N > 30, the central limit theorem allows us to consider a Gaussian distribution for the resulting noise [1]; the variance of the resulting mean noise is then N times lower than the initial one. To avoid solving the problem by direct deconvolution, we proceeded by parametric identification. Because the PSF cannot be computed directly from the ESF measurements, the parametric identification was performed on the amplitude portion of the PSF spectrum, the modulation transfer function (MTF).
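The variance reduction obtained by averaging N thermograms can be checked on synthetic data; the flat scene level, noise standard deviation and frame count below are illustrative assumptions:

```python
import numpy as np

# Synthetic frames: a flat irradiance level of 1.0 with additive,
# stationary Gaussian noise. Values are demo assumptions.
rng = np.random.default_rng(0)
N, npix = 36, 10_000                  # N > 30 frames, demo pixel count
sigma = 0.1                           # assumed per-frame noise std
frames = 1.0 + sigma * rng.standard_normal((N, npix))

mean_image = frames.mean(axis=0)

# Empirical check: the variance of the averaged noise is ~sigma**2 / N,
# i.e. N times lower than the single-frame noise variance.
var_single = frames[0].var()
var_mean = mean_image.var()
assert var_mean < 2 * sigma**2 / N
assert var_single > 10 * var_mean
```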

PSF_k(x, y) = h_diffraction(x, y) * h_geometrical(x, y) * h_footprint(x, y) * h_sampling(x, y)
MTF_k(ξ, η) = |II(PSF_k(x, y))|    (18)

where k is the index of the area in the plane of view of the infrared camera. The diffraction, footprint and sampling PSFs are known, because we know the lens aperture size, the detector size and the detector cell size. The geometrical PSF is the only unknown function; it depends on the value of the aberration function Φ. In a first step we identify the Φ_k values on each of the measurement areas, and in a second step we compute the coefficients a_i (i = 1..3) of the polynomial which fits the identified Φ_k values. The "measured" values of the MTF used for fitting the analytical model are obtained from the ESF data [2] by first computing a line spread function (LSF) profile:

LSF(x) = d(ESF(x))/dx    (19)

The magnitude of the one-dimensional Fourier transform of the line response provides information concerning one spatial-frequency component of the transfer function [2]:

MTF(ξ, 0) = |II(LSF(x))|    (20)

4. Results

Seven ESFs were measured at seven positions of the knife-edge source in the image plane. The thermogram obtained in the center of the field of view (FOV) is presented in figure 3. For each position, the corresponding ESF was measured at about 20 points in order to obtain an accurate sampling of the ESF slope (figure 4). Using equations 19 and 20, we calculated the corresponding MTF components. The results are presented in figure 5. For each position, a Φ(β_1, β_2) value was identified in order to obtain the best MTF fit. Two results of the fitting of the transfer function are presented in figure 6.

Fig. 3. Infrared image of the knife-edge source obtained in the center of the FOV
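The ESF-to-MTF processing of equations 19 and 20 can be sketched numerically on synthetic data; the smooth Gaussian edge response below stands in for the averaged measurements:

```python
import numpy as np

# Synthetic edge response: a Gaussian-blurred knife edge. Grid and blur
# width are demo assumptions, not the measured data.
x = np.linspace(-3.0, 3.0, 601)        # mm, demo sampling grid
dx = x[1] - x[0]
sigma = 0.3                            # mm, assumed blur width
esf = np.cumsum(np.exp(-x**2 / (2 * sigma**2)))
esf /= esf[-1]                         # edge response rising from 0 to 1

lsf = np.gradient(esf, dx)             # eq. (19): LSF = d(ESF)/dx
mtf = np.abs(np.fft.rfft(lsf))         # eq. (20): |Fourier transform|
mtf /= mtf[0]                          # unit value at zero frequency

# A Gaussian blur gives a smoothly decreasing MTF profile.
assert np.isclose(mtf[0], 1.0)
assert np.all(np.diff(mtf[:10]) < 0)
```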

Fig. 4. Measured ESF and LSF in the center of the FOV

Fig. 5. Measured MTF at various positions from the center of the FOV (0, 19, 40, 60, 80, 100 and 120 pixels)

Fig. 6. MTF results in the FOV center and at 120 pixels from the FOV center (measured and calculated MTF)

A good agreement between the "measured" and the calculated MTF components is observed, even in the proximity of the FOV limit of the camera (i.e. at 120 pixels from the FOV center, the extreme value being 160 pixels from the center). The values of the three parameters a_i were obtained by least-squares fitting on the Φ(β_1, β_2) scalar values; we obtained [-1.36, 5.90 10^-1, -2.05 10^-2]. The residual norm of the fit was low (7.08 10^-7) compared to Φ(β_1, β_2) values of about 50. Two of the resulting PSFs, computed from equation 18, are presented in figure 7.

Fig. 7. PSF in the FOV center (a) and at 120 pixels from the FOV center (b)
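The least-squares determination of the a_i coefficients can be reproduced schematically in Python. The pupil-radius grid and the noise-free "identified" Φ values below are synthetic, generated here from the coefficients reported above, purely to illustrate the fitting step:

```python
import numpy as np

# Synthetic identified aberration values on a demo pupil-radius grid;
# a_true reuses the coefficients reported in the Results section.
rho = np.linspace(0.2, 1.0, 7)                 # 7 measurement positions
a_true = np.array([-1.36, 5.90e-1, -2.05e-2])
phi = a_true[0] * rho**3 + a_true[1] * rho**2 + a_true[2] * rho

# Least-squares fit of the third-degree polynomial of eq. (10):
# phi = a1*rho^3 + a2*rho^2 + a3*rho (no constant term).
A = np.column_stack([rho**3, rho**2, rho])
a_fit, *_ = np.linalg.lstsq(A, phi, rcond=None)

# With noise-free synthetic data the coefficients are recovered exactly.
assert np.allclose(a_fit, a_true, atol=1e-10)
```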

5. Conclusion

The present model locally approximates the geometrical aberration blurs by the most likely (geometrically speaking) diffraction blur. The good MTF fit results confirm the validity of this assumption in the studied configuration. The low value of the fit residuals validates the second assumption of the model: the size of the blur can be correlated with the position of the considered area center by a third-degree polynomial. The next step will be image restoration by deconvolution with the PSF kernel obtained in this study.

Acknowledgments

This work was performed under the auspices of the R&D Department of Electricité de France. We would like to thank Jean-Claude Frichet for his support.

REFERENCES

[1] FRIEDEN, B. R., "Probability, statistical optics, and data testing", Third Edition, Springer, Berlin, 2001.
[2] BOREMAN, G. D., "Modulation transfer function in optical and electro-optical systems", SPIE Press, 2001.
[3] PAPINI, F. and GALLET, P., "Thermographie infrarouge", Masson, Paris, 1990.