STATISTICAL AND ANALYTICAL TECHNIQUES IN SYNTHETIC APERTURE RADAR IMAGING


STATISTICAL AND ANALYTICAL TECHNIQUES IN SYNTHETIC APERTURE RADAR IMAGING

By Kaitlyn Voccola

A Thesis Submitted to the Graduate Faculty of Rensselaer Polytechnic Institute in Partial Fulfillment of the Requirements for the Degree of DOCTOR OF PHILOSOPHY

Major Subject: MATHEMATICS

Approved by the Examining Committee: Margaret Cheney, Thesis Adviser; William Siegmann, Member; David Isaacson, Member; Matthew Ferrara, Member; Richard Albanese, Member

Rensselaer Polytechnic Institute, Troy, New York, August 2011 (For Graduation August 2011)

© Copyright 2011 by Kaitlyn Voccola. All Rights Reserved.

CONTENTS

LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGMENT
ABSTRACT
1. Introduction
2. Standard SAR & Backprojection
   Model for SAR data
   Image Formation, Backprojection, and Microlocal Analysis
3. SAR and Detection & Estimation
   Detection & Estimation Theory and the Generalized Likelihood Ratio Test
      Detection & Estimation
      The Generalized Likelihood Ratio Test
   Continuous GLRT
      General Continuous-Time Random Processes
      Reproducing-Kernel-Hilbert-Space Representations of Continuous-Time Random Processes
   The Relationship between the GLRT and Backprojection in SAR Imaging
4. Polarimetric synthetic-aperture inversion for extended targets in clutter
   Introduction
   Polarimetric Concepts
   Radar Cross Section for Extended Targets
      Radar Cross Section and Polarimetric Radar Cross Section
      Method of Potentials Solution of Maxwell's Equations
      Helmholtz Equation in Cylindrical Coordinates
      Scattering in Two Dimensions
      RCS for Infinitely Long Cylinder
         Normal Incidence
         Oblique Incidence
      Finite Cylinder RCS
   Dipole SAR Scattering Model
      Comparison to the extended target RCS model
      Scattering Model for the Target
      Scattering Model for Clutter
      Total Forward Model
   Image Formation in the presence of noise and clutter
      Statistically Independent Case
      Correlated Clutter and Target Case
   Numerical Simulations
      Numerical Experiments
      Example One - Horizontally Polarized Target
      Example Two - Vertically Polarized Target
      Example Three - 45° Polarized Target
5. Conclusions and Future Work

LITERATURE CITED

APPENDICES
A. FIOs and Microlocal Analysis
   A.1 Fourier Integral Operators
   A.2 Microlocal Analysis
B. Calculation of the Radiation of a Short Dipole
   B.0.1 Vector potential
   B.0.2 Far-field radiation fields
   B.0.3 Radiation vector for a dipole

LIST OF TABLES

4.1 Initial SCR in dB vs. final standard-processed image SCR in dB, horizontally polarized target
4.2 Initial SCR in dB vs. final coupled-processed image SCR in dB, horizontally polarized target
4.3 Initial SCR in dB vs. final standard-processed image SCR in dB, vertically polarized target
4.4 Initial SCR in dB vs. final coupled-processed image SCR in dB, vertically polarized target
4.5 Initial SCR in dB vs. final standard-processed image SCR in dB, 45° polarized target
4.6 Initial SCR in dB vs. final coupled-processed image SCR in dB, 45° polarized target

LIST OF FIGURES

4.1 Linear, Circular, and Elliptical Polarization States
4.2 The Polarization Ellipse
4.3 Cylindrical Coordinates
4.4 Scattering scenario with an infinite-length cylinder lying along the z-axis; incident field normal to the cylinder [28]
4.5 Scattering scenario for an infinite-length cylinder when the incident field makes an angle φ with the x-y plane (oblique incidence) [28]
4.6 Scattering scenario for a finite-length cylinder [28]
4.7 Spherical coordinates
4.8 HH component of target vector and target-plus-clutter vector, horizontally polarized target
4.9 HH, HV, and VV target-only data, horizontally polarized target
4.10 HH, HV, and VV target-embedded-in-clutter data, horizontally polarized target
4.11 HH image created using the standard processing vs. the true target function
4.12 HH image created using the coupled processing vs. the true target function
4.13 SCR vs. MSE for the standard-processed and coupled-processed images, horizontally polarized target
4.14 VV component of target vector and target-plus-clutter vector, vertically polarized target
4.15 HH, HV, and VV target-only data, vertically polarized target
4.16 HH, HV, and VV target-embedded-in-clutter data, vertically polarized target
4.17 VV image created using the standard processing vs. the true target function
4.18 VV image created using the coupled processing vs. the true target function
4.19 SCR vs. MSE for the standard-processed and coupled-processed images, vertically polarized target
4.20 HV component of target vector and target-plus-clutter vector, 45° polarized target
4.21 HH, HV, and VV target-only data, 45° polarized target
4.22 HH, HV, and VV target-embedded-in-clutter data, 45° polarized target
4.23 HV image created using the standard processing vs. the true target function
4.24 HV image created using the coupled processing vs. the true target function
4.25 SCR vs. MSE for the standard-processed and coupled-processed images, 45° polarized target

ACKNOWLEDGMENT

To my advisor, Dr. Margaret Cheney, for her support and encouragement. Thank you for your guidance, for sharing your expertise, and for allowing me to explore what I found most interesting. To Dr. Birsen Yazici, Dr. Matthew Ferrara, and Dr. Richard Albanese, thank you for the many fruitful discussions; all your input is invaluable. Also to Dr. William Siegmann and Dr. David Isaacson, thank you for being a part of my committee and for all your time and input to my thesis.

To my friends and colleagues in the math department, Lisa Rogers, Analee Miranda, Heather Palmeri, Tegan Webster, Joseph Rosenthal, Jessica Jones, Ashley Thomas, Peter Muller, and Jensen Newman, thank you for your support, friendship, and our productive discussions as well as our gossip sessions. In particular, thank you to Joseph Rosenthal for his computer support and his help with the figures in this document. Also a special thank you to Dawnmarie Robens for always being there to listen and for getting me through some of my toughest days as a graduate student.

To Subin George, Meredith Anderson, Jismi Johnson, Laura Hubelbank, Tinu Thampy, Peter Ruffin, Ebby Zachariah, and Sam John, thank you for being incredible friends and my second family. Also a special thanks to Mrs. Mosher for getting me through my worst days and making it possible for me to get to this point.

To my many amazing teachers along the way who have encouraged my love of mathematics and science and shared with me their passion for learning, thank you: Mrs. Fedornak, Mrs. Dunham, Ms. Gomis, Mr. Seppa, Mme. Refkofsky, and Dr. Kovacic.

To Stevie, thank you for being my first example, for bringing me laughter, and for always finding a way to make me smile. Thank you for being who you are; your creativity and passion inspire me.

To Elissa, my best friend, for your support, your love, your phone calls, and bathroom dance parties. You believe in me more than I do myself, and I will always be thankful. Your drive and your success amaze me every day, and I will forever be your biggest fan.

To my parents, for everything. Thank you for your constant love and support, and for allowing us to always dream big dreams.

For Mom, for never letting me give up.

ABSTRACT

In synthetic-aperture radar (SAR) imaging, a scene of interest is illuminated by electromagnetic waves. The goal is to reconstruct an image of the scene from measurements of the scattered waves made by airborne antennas. This thesis is focused on incorporating statistical modeling into imaging techniques.

The thesis first considers the relationship between backprojection in SAR imaging and the generalized likelihood ratio test (GLRT), a detection and estimation technique from statistics. Backprojection is an analytic image reconstruction algorithm. The generalized likelihood ratio test is used when one wants to determine whether a target of interest is present in a scene; in particular, it addresses the case when the target depends on a parameter that is unknown prior to processing the data. Under certain assumptions, namely that the noise present in the scene can be described by a Gaussian distribution, we show that the test statistic calculated in the GLRT is equivalent to the value of a backprojected image at a given location in the scene.

Next we consider the task of developing an imaging algorithm for extended targets embedded in clutter and thermal noise. We consider the case when a fully polarimetric radar system is used, and we assume scatterers in our scene are made up of dipole scattering elements in order to model the directional scattering behavior of extended targets. We formulate a statistical filtered-backprojection scheme in which the clutter, the noise, and the target are all represented by stochastic processes. Because of this statistical framework we choose to find the filter which minimizes the mean-square error between the reconstructed image and the actual target. Our work differs from standard polarimetric SAR imaging in that we do not perform channel-by-channel processing. We find that it is preferable to use what we call a coupled processing scheme, in which we use all sets of collected data to form all elements of the scattering matrix. We show in our numerical experiments that not only is the mean-square error minimized but also the final signal-to-clutter ratio is improved when utilizing our coupled processing scheme.

CHAPTER 1
Introduction

In synthetic-aperture radar (SAR) imaging, a scene of interest is illuminated by electromagnetic waves. The goal is to reconstruct an image of the scene from the measurement of the scattered waves using airborne antennas. This thesis is focused on the use of statistics in SAR imaging problems. The theory of statistics has been widely used in detection and estimation schemes. These techniques process radar data in order to determine if a target is present in the scene of interest and also to estimate parameters describing the target. However, most existing imaging algorithms were derived based on purely deterministic considerations. The significant and unwelcome effects of noise make the inclusion of statistical aspects critical. In addition, the complexity of the scattering objects suggests a stochastic modeling approach. For example, treating foliage as a random field seems an obvious choice, but a stochastic model is best even for objects such as vehicles, whose radar cross-section varies widely with angle of view. The first body of work in this thesis investigates the relationship between methods that arise from this statistical theory and a standard imaging technique, backprojection. In particular we consider the relationship between backprojection in SAR imaging and the generalized likelihood ratio test (GLRT) [39]. Backprojection is a commonly used analytic image reconstruction algorithm [1, 2]. It has the advantage of putting the visible edges of the scene at the right location and orientation in the reconstructed images; this property can be shown using the theory of microlocal analysis [47, 48, 49, 50]. The generalized likelihood ratio test is used when one wants to determine if a target of interest is present in a scene. In particular it considers the case when the target depends on a parameter, such as location, which is unknown prior to processing the data.
We focus on the GLRT because of its wide use in SAR detection problems [17, 23, 39]. Emanuel Parzen developed an entire theory of representing stochastic processes in terms of reproducing kernel Hilbert spaces [8]. This theory enables one to formulate detection and estimation algorithms, including the GLRT, for the case when the data are defined on a continuous index set. Parzen developed this theory mainly for communication applications; this dissertation shows how it can be applied to radar problems. We show, moreover, that the test statistic calculated in the GLRT under Gaussian noise assumptions is equivalent to the value of the backprojected image at each pixel or location on the ground. A summary of this work appears in [46]. A similar connection was noted in [6] for image formation using the Radon transform in the case when the image is parametrized by a finite number of parameters. This result sheds light on the overlapping conclusions that are reached from two very different theories. We find that in special cases utilizing microlocal analysis in developing imaging algorithms produces the same data-processing techniques as those derived using statistical theory. This suggests that using both techniques, and further investigating this relationship, can lead to a better understanding of image reconstruction.

The second half of the thesis focuses on the development of a hybrid technique that uses both analytical and statistical theory in the framework of backprojection. This technique was previously developed for the scalar case in [15], where it was shown that incorporating the statistics of the scene into the imaging algorithm leads to clutter mitigation and also minimizes the effect of noise on the image. We extend these results to a full vector treatment of the transmission and scattering of the electromagnetic waves. That is, we consider the case when a fully polarimetric radar system is used. Polarimetric radar has the advantage of producing multiple sets of data during a single data collection, and therefore provides more information for the image reconstruction or detection task.
However, previous work has not shown that this extra information actually improves image quality or detection ability enough to justify the additional hardware and computation cost incurred when utilizing a polarimetric system. We present a technique that gives quantifiable improvements in image quality, namely reduced mean-square error (MSE) and improved final-image signal-to-clutter ratio (SCR). We also note that most work in polarimetry has focused solely on detection and estimation schemes [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28]. It is typically assumed that one can reconstruct each element of the target scattering vector from the corresponding data set, and therefore standard imaging schemes are applied to each set of data separately. In this work we begin by deriving the model for the scattered field systematically and uncover the assumption that leads to this treatment of polarimetric radar data. We choose not to make this assumption, and we find that it is optimal, in the mean-square sense, to use what we call a coupled processing technique. This coupled processing uses every data set to reconstruct each element of the target vector. This does add to the computation time of the imaging algorithm, but we show that it improves MSE and final-image SCR, as stated above. This processing also enables one to reconstruct target orientation correctly in cases where typical polarimetric processing fails.

We also note that in developing this polarimetric imaging scheme we consider specifically extended targets embedded in clutter and thermal noise. It is of interest in SAR to develop better target models that display anisotropic, or directional, scattering. A simple target that displays this behavior is a curve, though most man-made objects scatter electromagnetic waves anisotropically. We consider the curve specifically for simplicity, but this work can be extended to more complicated targets. As we stated previously, this is essentially the same as saying the radar cross section of man-made targets varies widely with angle of view. This type of scattering has been studied previously in many different capacities. For example, in [29] the radar data is broken up into data received from different sets of observation angles, or different intervals of the bandwidth. This does help one to characterize the different returns from different angles, but it leads to a reduction in resolution by processing data defined on smaller apertures or with smaller bandwidths. We instead choose to characterize this behavior in the scattering, or forward, model.
We assume scatterers in our scene are made up of dipole scattering elements, as opposed to the typical point-scatterer assumption. The idea of modeling scatterers as dipoles was previously considered in [27]. Dipole scatterers have the advantage of having both an orientation and a location associated with them, and they display anisotropic scattering behavior. Our work differs from [27] in that we make simplifying assumptions that allow us to write out an analytic expression for the image obtained; the work [27], on the other hand, focuses on a purely numerical scheme.
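To illustrate why a dipole element captures directional scattering while a point scatterer cannot, here is a small sketch (our illustration, not code from the thesis) using the standard short-dipole far-field pattern, whose amplitude is proportional to the sine of the angle between the dipole axis and the look direction:

```python
import numpy as np

def dipole_amplitude(axis, look):
    """Relative far-field scattering amplitude of a short dipole:
    proportional to |sin(theta)|, where theta is the angle between the
    dipole axis and the look direction. Illustrative and unnormalized."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    look = np.asarray(look, float) / np.linalg.norm(look)
    cos_theta = float(np.dot(axis, look))
    return float(np.sqrt(max(0.0, 1.0 - cos_theta**2)))  # |sin(theta)|

z_dipole = [0.0, 0.0, 1.0]   # a vertically oriented scattering element

broadside = dipole_amplitude(z_dipole, [1.0, 0.0, 0.0])  # viewed from the side
endfire   = dipole_amplitude(z_dipole, [0.0, 0.0, 1.0])  # viewed along its axis

print(broadside)  # 1.0: strongest return, look direction perpendicular to axis
print(endfire)    # 0.0: no return when looking straight down the dipole axis
```

A point scatterer would return the same amplitude from every direction; the dipole's return varies with viewing angle, which is exactly the anisotropic behavior the extended-target model needs.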

The thesis is organized as follows. Chapter 2 describes the standard SAR forward model, using the scalar wave equation to describe the wave propagation; we then consider the general method of filtered backprojection and describe the pseudolocal property of the image-fidelity operator. The third chapter focuses on detection and estimation techniques, namely the generalized likelihood ratio test. We summarize the use of the Radon-Nikodym derivative and the reproducing-kernel-Hilbert-space representation of a stochastic process to write out the expression for the test statistic, and we conclude with our result describing the relationship between backprojection and the GLRT. In the fourth chapter we shift our study to the task of imaging extended targets. We begin by reviewing the basics of polarimetry and a full-vector solution of Maxwell's equations, which is used to arrive at the expressions for the incident and scattered fields. We then outline our specific forward model for a dipole SAR system, using the assumption that scatterers on the ground are made up of dipole elements. Next we derive the optimal backprojection filters in the mean-square (MS) sense and finish with the results of numerical simulations. We conclude with final remarks about the scope and impact of this thesis work and describe areas in which the work may be utilized further.

CHAPTER 2
Standard SAR & Backprojection

The received data in synthetic-aperture radar can be thought of as weighted integrals of a function T over curved manifolds such as circles, ellipses, or hyperbolas. The function T models the reflectivity or irradiance of the region of interest. We consider three different SAR modalities: mono-static SAR, bi-static SAR, and hitchhiker, or passive, SAR. Mono-static SAR involves one antenna used for both transmission and reception, while bi-static SAR utilizes two different antennas for transmission and reception, possibly mounted on separate airborne platforms. Passive SAR involves one or more receiving antennas which attempt to receive a scattered field that is the result of fields transmitted by sources of opportunity. These sources are transmitters already present in the environment, such as cell-phone towers or satellites. We will derive the expression for the data in the mono-static case and give the analogous formulas for the bi-static and hitchhiker cases with references. We use the following font conventions: bold italic font (x) denotes two-dimensional vectors and bold roman font (x) denotes three-dimensional vectors. Our goal is to end up with the following relationship between the data, denoted d, and T:

d = F[T]    (2.1)

where F is called the forward operator. We will then seek an inverse of F, known as a backprojection operator, in order to reconstruct, or form an image of, T.
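To make the abstract relation d = F[T] and its approximate inversion concrete, here is a toy discrete sketch (our illustration, not the thesis's operators): F becomes a complex matrix, and backprojection is modeled by its adjoint, which is not an exact inverse but concentrates energy at the scatterer's location.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretization: the scene T becomes a vector of N reflectivity
# samples, and the forward operator F a complex M x N matrix whose rows
# play the role of delayed, weighted measurements (illustrative phases).
N, M = 50, 200
F = np.exp(2j * np.pi * rng.random((M, N))) / np.sqrt(M)

T = np.zeros(N)
T[17] = 1.0                  # a single point scatterer in the scene
d = F @ T                    # the data: d = F[T]

# Backprojection modeled as the adjoint (conjugate transpose) of F:
# only an approximate inverse, but it peaks at the right location.
image = F.conj().T @ d

print(int(np.argmax(np.abs(image))))   # -> 17, the scatterer's position
```

The adjoint plays the role of the (unfiltered) backprojection operator; the filtered version described later sharpens this approximate inverse.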

2.1 Model for SAR data

In SAR, the waves are electromagnetic and therefore their propagation is described by Maxwell's equations. In the time domain these equations have the form

\nabla \times E(t, x) = -\frac{\partial B(t, x)}{\partial t}    (2.2)

\nabla \times H(t, x) = J(t, x) + \frac{\partial D(t, x)}{\partial t}    (2.3)

\nabla \cdot D(t, x) = \rho    (2.4)

\nabla \cdot B(t, x) = 0    (2.5)

where E is the electric field, B is the magnetic induction field, D is the electric displacement field, H is the magnetic intensity or magnetic field, ρ is the charge density, and J is the current density. Since most of the propagation takes place in dry air we assume that the electromagnetic properties of free space hold; therefore ρ = 0 and J = 0. We also have the free-space constitutive relations, which are expressed as follows:

D = \epsilon_0 E    (2.6)

B = \mu_0 H.    (2.7)

Using these assumptions and taking the curl of (2.2) and substituting (2.3) into the result gives us the following expression:

\nabla \times \nabla \times E = -\mu_0 \epsilon_0 \frac{\partial^2 E}{\partial t^2}.    (2.8)

We use the triple-product identity (also known as the BAC-CAB identity) on the left-hand side of the above equation to obtain

\nabla(\nabla \cdot E) - \nabla^2 E = -\mu_0 \epsilon_0 \frac{\partial^2 E}{\partial t^2}.    (2.9)

Note that \nabla \cdot E = 0 because of the free-space assumption, which implies we have

\nabla^2 E = \mu_0 \epsilon_0 \frac{\partial^2 E}{\partial t^2}.    (2.10)
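As a quick consistency check of (2.10) (added here for illustration, not from the thesis), a time-harmonic plane wave solves the free-space wave equation exactly when its wavenumber and frequency satisfy the free-space dispersion relation:

```latex
E(t, \mathbf{x}) = E_0\, e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}
\;\Longrightarrow\;
\nabla^2 E = -|\mathbf{k}|^2 E,
\qquad
\mu_0 \epsilon_0 \frac{\partial^2 E}{\partial t^2} = -\mu_0 \epsilon_0\, \omega^2 E,
```

so (2.10) holds precisely when |\mathbf{k}| = \omega \sqrt{\mu_0 \epsilon_0} = \omega / c_0, i.e. the wave propagates at the free-space speed c_0 = (\mu_0 \epsilon_0)^{-1/2}.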

This is equivalent to stating that, in Cartesian coordinates, each component of the vector E satisfies the scalar wave equation. Using similar steps we may show that H also satisfies the wave equation in free space. In particular these expressions hold for a constant wave speed, c_0 = (\mu_0 \epsilon_0)^{-1/2}. When a scatterer is present the wave speed is no longer constant, and it will change when the wave interacts with the object. We may think of this as a perturbation in the wave speed, which we write as

\frac{1}{c^2(x)} = \frac{1}{c_0^2} + T(x)    (2.11)

where T is known as the scalar reflectivity function. Using this function to describe scatterers is analogous to assuming all scatterers are made up of point scatterers, as pointed out in [27]. This will be contrasted in Chapter 4 with a dipole model for scatterers. Using this variable wave speed we say that E satisfies the following wave equation:

\left( \nabla^2 - \frac{1}{c^2(x)} \frac{\partial^2}{\partial t^2} \right) E(t, x) = 0.    (2.12)

This model is often used for radar scattering despite the fact that it is not entirely accurate. The reflectivity function T really represents a measure of the reflectivity for the polarization measured by the antenna. In Chapter 4 we will describe an object by a scattering vector dependent on all the different possible polarizations of the antennas; this second approach does not neglect the vector nature of the electromagnetic fields.

In order to incorporate the antennas we consider the total electric field E^{tot} = E^{in} + E^{sc}, which is composed of the incident and scattered fields. The full wave propagation and scattering problem is described by the following two wave equations:

\left( \nabla^2 - \frac{1}{c^2(x)} \frac{\partial^2}{\partial t^2} \right) E^{tot}(t, x) = j(t, x)    (2.13)

\left( \nabla^2 - \frac{1}{c_0^2} \frac{\partial^2}{\partial t^2} \right) E^{in}(t, x) = j(t, x)    (2.14)

where j is a model for the current density on the antenna. Using (2.11) and subtracting (2.14) from (2.13) results in the following expression for the scattered field:

\left( \nabla^2 - \frac{1}{c_0^2} \frac{\partial^2}{\partial t^2} \right) E^{sc}(t, x) = T(x) \frac{\partial^2}{\partial t^2} E^{tot}(t, x).    (2.15)

We then utilize the Green's function solution of the wave equation to arrive at the Lippmann-Schwinger integral equation [1]:

E^{sc}(t, x) = -\int \frac{\delta(t - \tau - |x - y|/c_0)}{4\pi|x - y|}\, T(y)\, \partial_\tau^2 E^{tot}(\tau, y)\, d\tau\, dy.    (2.16)

It is important to observe that E^{sc} appears on both sides of the equation. Also note that this equation is nonlinear in that the two unknown quantities, E^{sc} and T, are multiplied on the right-hand side of (2.16). In order to linearize, and ultimately reconstruct T, we invoke the Born, or single-scattering, approximation. This amounts to replacing E^{tot} with E^{in} in (2.16); for more details on the Born approximation see [1]. We now obtain the following approximate expression for the scattered field:

E^{sc}(t, x) = -\int \frac{\delta(t - \tau - |x - y|/c_0)}{4\pi|x - y|}\, T(y)\, \partial_\tau^2 E^{in}(\tau, y)\, d\tau\, dy.    (2.17)

If we take the Fourier transform of (2.17) we obtain the frequency-domain expression for the scattered field

E^{sc}(\omega, x) = \int \frac{e^{ik|x - y|}}{4\pi|x - y|}\, \omega^2 T(y)\, E^{in}(\omega, y)\, dy    (2.18)

where k = \omega/c_0. We will now outline a specific model for the incident field and also explain how to take into account the fact that the antennas are in motion in the synthetic-aperture radar case.

i.) Mono-static SAR: Recall we assume that E^{in} satisfies the scalar wave equation given in (2.14). In the frequency domain this equation is written as

(\nabla^2 + k^2) E^{in}(\omega, x) = J(\omega, x)    (2.19)

where J is the Fourier transform of j, the current density on the antenna. If we

again use the Green's function solution of the wave equation, we have

E^{in}(\omega, x) = \int \frac{e^{ik|x - y|}}{4\pi|x - y|}\, J(\omega, y)\, dy \approx \frac{e^{ik|x - x_0|}}{4\pi|x - x_0|}\, F(k, \widehat{x - x_0})    (2.20)

where \widehat{x - x_0} indicates the unit vector in the direction of x - x_0. Note we have used the far-field approximation and assumed that the antenna center is located at the position x_0. F is what is known as the radiation pattern of the transmitting antenna and is analogous to the radiation vector [1], which is explicitly calculated for a dipole antenna in Appendix B. This quantity is proportional to the Fourier transform of the current density. Using this model for the incident field we are able to write out the scattered field as

E^{sc}(\omega, x) = \int \frac{e^{ik|x - y|}}{4\pi|x - y|}\, \omega^2 T(y)\, \frac{e^{ik|y - x_0|}}{4\pi|y - x_0|}\, F(k, \widehat{y - x_0})\, dy.    (2.21)

To obtain the expression for the data received we evaluate E^{sc} at the receiving antenna location, which is x_0 in the mono-static case. Therefore we have

E^{sc}(\omega, x_0) = \int e^{2ik|x_0 - y|}\, A(\omega, x_0, y)\, T(y)\, dy    (2.22)

where A, the amplitude, is given by

A(\omega, x_0, y) = \frac{\omega^2\, F_t(k, \widehat{y - x_0})\, F_r(k, \widehat{y - x_0})}{(4\pi|x_0 - y|)^2},    (2.23)

with F_t the radiation pattern and F_r the reception pattern of the antenna.

In order to include the antenna motion in our model we assume the antenna follows a path \gamma(s) \in \mathbb{R}^3. This may also be thought of as the flight path of the aircraft on which the antenna is mounted in a SAR system. If we consider, for example, a pulsed system where pulses are transmitted at times t_n, the antenna has a position \gamma(t_n). To simplify our analysis we choose to instead parametrize the flight path with a continuous parameter denoted s. We call s the slow time and, in contrast, now call t the fast time. We utilize these two different time scales because the scale on which

electromagnetic waves travel is significantly faster than the scale on which the antenna moves. We therefore have that the position of the antenna at a given slow time s is \gamma(s). We now replace x_0 with \gamma(s) in (2.22) to obtain the following expression for the received mono-static SAR data:

D(s, \omega) = \int e^{2ik|\gamma(s) - y|}\, A(\omega, s, y)\, T(y)\, dy    (2.24)

where D is the Fourier transform of d. In the time domain we have

d(s, t) = F[T](s, t) = \int e^{-i\omega(t - r_{s,x})}\, A(x, s, \omega)\, T(x)\, dx\, d\omega,    (2.25)

where

r_{s,x} = 2|\gamma(s) - x|/c_0.    (2.26)

Note that for simplicity we will now assume flat topography, where x = (x_1, x_2) and \mathbf{x} = (x, 0). Also observe we have obtained a linear relationship between the data d and the target reflectivity T, in terms of the forward operator F. The method of stationary phase can be used to show that the main contributions to (2.25) come from the critical set

X = \{(x_1, x_2, 0) : c_0 t = 2|\gamma(s) - x|\},    (2.27)

which consists of circles centered at (\gamma_1(s), \gamma_2(s), 0) with radius \sqrt{c_0^2 t^2/4 - \gamma_3^2(s)}, where \gamma(s) = (\gamma_1(s), \gamma_2(s), \gamma_3(s)). We now quote the analogous expressions for the data received in the bi-static and hitchhiker SAR cases.

ii.) Bi-static SAR: We begin by assuming that the transmitting antenna and receiving antenna are located on separate airborne platforms, and hence move along different paths \gamma_T(s) \in \mathbb{R}^3 and \gamma_R(s) \in \mathbb{R}^3, respectively. The data can be written [3]

d(s, t) = F[T](s, t) = \int e^{-i\omega(t - r_{s,x})}\, A(x, s, \omega)\, T(x)\, dx\, d\omega,    (2.28)

where

r_{s,x} = |\gamma_T(s) - x|/c_0 + |\gamma_R(s) - x|/c_0.    (2.29)

The method of stationary phase can be used to show that the main contributions to (2.28) come from the critical set

X = \{(x_1, x_2, 0) : c_0 t = |\gamma_T(s) - x| + |\gamma_R(s) - x|\}.    (2.30)

We conclude that our received data are closely related to integrals of T along ellipses.

iii.) Hitchhiker SAR: This is a passive SAR modality which relies on sources of opportunity to image the ground irradiance. The system consists of airborne receivers that traverse arbitrary paths over the scene of interest. The first step in received-signal processing involves correlating the fast-time signals at different slow-time points. The tomographic reconstruction uses the correlated signal d_{ij} defined by

d_{ij}(s, s', t) = \int d_i(s, \tau)\, \overline{d_j(s + s', \tau - t)}\, d\tau.    (2.31)

The received signal at the i-th receiver, d_i, is given as follows:

d_i(s, t) = \int d_{i,y}(s, t)\, dy.    (2.32)

This is a superposition of signals over all transmitters at locations denoted y. If we assume there are N \geq 1 receiving antennas, each following a trajectory \gamma_{R_i}(s) \in \mathbb{R}^3, i = 1, \ldots, N, we can write [4] the Born-approximated model of the correlated signal for receivers i and j:

d_{ij}(s, s', t) = F[T](s, s', t) = \int e^{-i\omega(t - r_{ij}(s, s', x))}\, T(x)\, A(x, s, s', \omega)\, dx\, d\omega    (2.33)

for i, j = 1, \ldots, N, where

r_{ij}(s, s', x) = |x - \gamma_{R_i}(s)|/c_0 - |x - \gamma_{R_j}(s + s')|/c_0    (2.34)

is the hitchhiker range. Note A(x, s, s', \omega) is again an amplitude which includes

geometrical spreading factors, antenna beam patterns, and transmitter waveforms. Here T is the scene irradiance. The leading-order contribution comes from the critical set

X = \{(x_1, x_2, 0) : c_0 t = |x - \gamma_{R_i}(s)| - |x - \gamma_{R_j}(s + s')|\},    (2.35)

which is a set of hyperbolas.

In general, the data in all three modalities can be written as

d(s, t) = F[T](s, t) = \int e^{-i\omega(t - \phi(s, x))}\, A(x, s, \omega)\, T(x)\, dx\, d\omega.    (2.36)

The general phase \phi takes on the following forms:

\phi(s, x) = r_{s,x} = 2|\gamma(s) - x|/c_0
\phi(s, x) = r_{s,x} = |\gamma_T(s) - x|/c_0 + |\gamma_R(s) - x|/c_0    (2.37)
\phi(s, x) = r_{ij}(s, s', x) = |x - \gamma_{R_i}(s)|/c_0 - |x - \gamma_{R_j}(s + s')|/c_0

in the mono-static, bi-static, and hitchhiker SAR cases, respectively. Under mild conditions, the operator F of (2.36) that connects the scene T to the data d is a Fourier integral operator (FIO); a precise definition of an FIO can be found in Appendix A. It is known that the behavior of an FIO is determined mainly by the critical set X of the phase. In the case of mono-static SAR, X is a set of circles; for bi-static SAR, it is a set of ellipses; and in hitchhiker SAR it is a set of hyperbolas. An approximate inverse of F can be computed by another FIO, which is described in the next section.

2.2 Image Formation, Backprojection, and Microlocal Analysis

Let us now focus primarily, for the sake of simplicity, on the general model for the data (2.36). To form an image we aim to invert (2.36) by applying an imaging

operator K to the collected data. Because F is a Fourier integral operator we can compute an approximate inverse by means of another FIO. This FIO typically has the form of a filtered-backprojection (FBP) operator. In our case, we form an approximate inverse of F as an FBP operator, which first filters the data and then backprojects to obtain the image. Our imaging operator therefore takes the form

I(z) = K[d](z) := \int e^{i\omega(t - \phi(s, z))}\, Q(z, s, \omega)\, d(s, t)\, d\omega\, ds\, dt = \int e^{-i\omega\phi(s, z)}\, Q(z, s, \omega)\, D(s, \omega)\, d\omega\, ds,    (2.38)

where z = (z_1, z_2), \mathbf{z} = (z, 0), D(s, \omega) is the Fourier transform of the data in fast time, and Q is a filter which is determined in several different ways depending on the application. If we insert the expression (2.36) for the data into the above, we have

I(z) = KF[T](z) = \int e^{i\omega(\phi(s, x) - \phi(s, z))}\, Q(z, s, \omega)\, A(x, s, \omega)\, d\omega\, ds\, T(x)\, dx.    (2.39)

In synthetic-aperture imaging we are especially interested in identifying singularities, or edges and boundaries of objects, from the scene of interest in our image. The study of singularities is part of microlocal analysis, which also encompasses FIO theory [47, 48, 49, 50]. We define the singular structure of a function by its wavefront set, which is the collection of singular points and their associated directions; a precise definition of the wavefront set is given in Appendix A. We use the concepts of microlocal analysis to analyze how singularities in the scene, or in T, correspond to singularities in the image. We can rewrite (2.39) as

I(z) = L[T](z) = \int L(z, x)\, T(x)\, dx,    (2.40)

where L = KF is known as the image-fidelity operator. The kernel L(z, x) of L is called the point-spread function (PSF). We will show that L is a pseudodifferential operator, which has a kernel of the form

L(z, x) = \int e^{i(x - z)\cdot\xi}\, p(z, x, \xi)\, d\xi,    (2.41)

where p must satisfy certain symbol estimates, as in the definition of a general FIO; see Appendix A for more details. It is essential that our image-fidelity operator be a pseudodifferential operator because this class of FIOs has the property that they map wavefront sets in a desirable way. This property is known as the pseudolocal property, which states that WF(Lf) \subset WF(f); that is, the operator L does not increase the wavefront set. In the imaging setting this says that the visible singularities of the scene T are put in the correct location with the correct orientation. This is desirable for our application because we can then say with certainty that no singularities or edges in the image are the result of artifacts. However, we note that it is possible some edges present in T may not appear in the image, especially if the viewing aperture is limited. Also we observe that singularities in the data due to multiple-scattering effects which are not captured by the model may give rise to artifacts in the image.

We now show that the filtered-backprojection operator is indeed a pseudodifferential operator and therefore produces an image where edges are in the correct location and have the correct orientation. In order to demonstrate that L is the kernel of a pseudodifferential operator, our goal is to determine the imaging operator K so that L is of the form in equation (2.41). First we must ensure K is an FIO by requiring Q to satisfy a symbol estimate similar to that of p; i.e., we assume that for some m_Q,

\sup_{(s, z) \in K} \left| \partial_\omega^\alpha \partial_s^\beta \partial_{z_1}^{\rho_1} \partial_{z_2}^{\rho_2} Q(z, s, \omega) \right| \leq C (1 + \omega^2)^{(m_Q - \alpha)/2}    (2.42)

where K is any compact subset of \mathbb{R} \times \mathbb{R}^2 and, for every multi-index \alpha, \beta, \rho_1, \rho_2, there is a constant C = C(K, \alpha, \beta, \rho). Again, see Appendix A for more details on the symbol estimate. We now must show that the phase of L can be written in the form \Phi(x, z, \xi) = (x - z)\cdot\xi.
In order to determine how close the phase is to that of a pseudodifferential operator, one applies the method of stationary phase to the s and ω integrals. The stationary-phase method gives us an approximate formula for the large-parameter behavior of our oscillatory integral. First we introduce a large parameter β by the change of variables ω = βω′. The stationary-phase theorem tells us that the main contribution to the integral comes from the critical points of the phase, i.e. points satisfying the critical conditions

    0 = ∂_ω Φ ∝ φ(s,x) − φ(s,z),
    0 = ∂_s Φ ∝ ∂_s(φ(s,x) − φ(s,z)).    (2.43)

One of the solutions of these equations is the critical point x = z. Other critical points lead to artifacts in the image; however, their presence depends on the measurement geometry and the antenna beam pattern. We assume the flight trajectories and antenna beam patterns are such that we obtain a critical point only when x = z. In a neighborhood of the point x = z, we use a Taylor expansion of the exponent to force the phase to look like that of a pseudodifferential operator. We make use of the formula

    f(x) − f(z) = ∫₀¹ (d/dμ) f(z + μ(x−z)) dμ = (x − z) · ∫₀¹ ∇f|_{z+μ(x−z)} dμ,    (2.44)

where in our case f(z) = ωφ(s,z). We then make the Stolt change of variables

    (s, ω) ↦ ξ = Ξ(s, ω, x, z) = ∫₀¹ ∇f|_{z+μ(x−z)} dμ.    (2.45)

This change of variables allows us to obtain a new form of the point-spread function:

    L(z,x) = ∫ e^{i(x−z)·ξ} Q A(x, s(ξ), ω(ξ)) |∂(s,ω)/∂ξ| dξ.    (2.46)

From this expression we see that the phase of L is of the form required of pseudodifferential operators. With the symbol-estimate requirements made on Q, we have shown that our image-fidelity operator is indeed a pseudodifferential operator. We conclude that the pseudolocal property holds, and therefore our microlocal-analysis-based reconstruction method preserves the singularities, or edges, in our scene of interest.
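The stationary-phase approximation invoked above can be checked numerically. The sketch below is illustrative only: it uses the hypothetical phase φ(x) = cos x (stationary point x₀ = 0 with φ''(0) = −1) and amplitude a(x) = e^{−x²}, and compares a brute-force evaluation of the oscillatory integral with the leading-order stationary-phase formula a(x₀) e^{iβφ(x₀)} e^{iπ sgn φ''(x₀)/4} √(2π/(β|φ''(x₀)|)).

```python
import numpy as np

beta = 500.0                            # large parameter
x = np.linspace(-10.0, 10.0, 200001)    # fine grid: >100 samples per oscillation
dx = x[1] - x[0]
a = np.exp(-x**2)                       # smooth, rapidly decaying amplitude
phase = np.cos(x)                       # stationary point at x = 0, phi''(0) = -1

# brute-force Riemann sum of the oscillatory integral
I_num = np.sum(a*np.exp(1j*beta*phase))*dx

# leading-order stationary-phase approximation at x0 = 0
I_sp = 1.0*np.exp(1j*beta)*np.exp(-1j*np.pi/4)*np.sqrt(2*np.pi/beta)

rel_err = abs(I_num - I_sp)/abs(I_sp)
print(rel_err)                          # small: corrections are O(1/beta)
```

The other stationary points of cos on this interval (x = ±π, ±2π, ...) contribute negligibly because a decays rapidly, which mirrors the role of the antenna beam pattern in suppressing the unwanted critical points above.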

CHAPTER 3
SAR and Detection & Estimation

Detection and estimation theory is an area of statistics focused on processing random observations, or measurements. The goal is to extract information about whatever physical process led to the observations. Typically one assumes that there exists a random observation Y ∈ Γ, where Γ is known as the observation space. The two types of problems addressed in this theory are: (i) detection, in which one makes a decision among a finite number of possible situations describing Y; and (ii) estimation, in which one assigns values to some quantity not observed directly.

To demonstrate these concepts more concretely we consider the case of radar, and SAR in particular. In this case Y is thought of as the signal received at the antenna, and Γ would be the set of all possible returns. The detection task would be to decide whether or not a specific scatterer, or target, contributed to Y. That is, we seek to decide if a target is present in the scene of interest or not. The estimation task seeks to determine quantities associated with a target such as location, shape, velocity, etc.

In this work, we focus on the task of determining whether or not a target is present in the scene, and we also aim to estimate its location. This goal encompasses both detection and estimation, and therefore we utilize a technique which performs both tasks. In particular we consider the well-known generalized likelihood ratio test (GLRT) [37, 39]. This test is designed specifically for the problem of detecting signals (targets) which depend on unknown parameters. It includes a detection step using the likelihood ratio test from Neyman-Pearson detection theory and also a maximum-likelihood estimation step to determine the location (or any other desired unknown parameter). We motivate each of these steps individually and then explain in more detail how they are combined to form the GLRT.
We begin with this background information on the general ideas of estimation and detection schemes and the GLRT in particular. We then summarize Parzen's work on the use of Hilbert space inner products for expressing the test statistic in the case of continuous-time Gaussian random processes [8]. We conclude the chapter with our result of applying this work to the SAR problem, demonstrating that the test statistic calculated in the case of the GLRT is in fact a backprojected image formed with a matched filter.

3.1 Detection & Estimation Theory and the Generalized Likelihood Ratio Test

In most cases, when we consider a detection or estimation problem, our measurements are corrupted by noise. The effect of this noise on the data is unknown, which leads to a probabilistic treatment of the task. We therefore must define a family of probability distributions on Γ, each of which corresponds to some state that contributed to the data or to some set of the desired parameters one wishes to calculate. After we model these distributions we attempt to determine the optimal way of processing the data in order to make our decision or estimate the unknown parameters. This method is chosen based on a variety of criteria. For example, the choice depends on whether the data is a discrete-time or continuous-time random process. (Note the random process may instead depend on discrete or continuous spatial variables such as position.) In addition, the a priori information we have about the data, and the way in which we will evaluate the performance of our detection and/or estimation scheme, contribute to this decision. Usually one chooses between a Bayesian criterion, such as minimizing a cost function, and the Neyman-Pearson criterion, in which we search for the most powerful test of a given test size. We go into more detail about these choices below. Note that in order to assign a probability distribution on Γ we first must determine probabilities for each subset of the observation space.
In some cases it is not possible to do this in a manner that is consistent for all subsets, so we must assume that there exists a σ-algebra, denoted G, containing all the subsets of Γ to which we are able to assign probabilities. We then call the pair (Γ, G) the observation space.

3.1.1 Detection & Estimation

The detection part of our task may be formulated as a binary hypothesis testing problem in which we hope to decide whether the observation is associated with the target-absent or target-present situation. We call these two cases the null and alternative hypotheses, respectively. We model these situations with the two probability distributions P₀ and P₁ defined on (Γ, G). We typically express this type of problem in the following manner:

    H₀ : Y ∼ P₀
    H₁ : Y ∼ P₁.    (3.1)

We can also express this in terms of probability density functions p₀ and p₁ as

    H₀ : Y ∼ p₀
    H₁ : Y ∼ p₁.    (3.2)

Specifically for the radar problem we write

    H₀ : Y = n
    H₁ : Y = d + n,    (3.3)

where Y denotes the measured data, d is the data from the target (typically given by equation (2.36)), and n is additive, or thermal, noise. Our goal is to process Y in such a way that we are able to determine if the target was present in our scene of interest. That is, we must decide whether the null or the alternative hypothesis is the true hypothesis describing our measured data.

In order to process the data we define a decision rule, denoted δ. This rule, or test, partitions the observation space Γ into two subsets, Γ₁ ∈ G and Γ₀ = Γ₁^C, where the superscript C denotes the complement. When Y ∈ Γ_j we decide that H_j is true, for j = 0, 1.

In the GLRT we choose the Neyman-Pearson criterion in order to define δ. This method seeks to maximize the probability of detection (also known as the power of δ) for a given probability of false alarm (also known as the size of δ). It is important to observe that when using this criterion there is always a trade-off between power, or probability of detection, and size, or probability of false alarm. We choose this criterion over the Bayesian approach because it is often the case that we do not have prior information on the probability distributions describing the data. In Bayesian estimation one typically assigns costs to the decisions and then seeks to minimize the Bayes risk, which is the average cost incurred by a decision rule δ. In order to calculate this quantity one must assign probabilities to the occurrences of H₀ and H₁, and we do not necessarily have information on how often H₁ occurs versus H₀. Consequently we focus on the Neyman-Pearson approach from this point onward.

We may express the process of maximizing the power for a given size explicitly as

    max_δ P_D(δ) subject to P_F(δ) ≤ α,    (3.4)

where P_D is the probability of detection, or power, of δ; P_F is the probability of false alarm, or size, of δ; and α is the desired test size, or accepted false-alarm rate. We define δ via the Neyman-Pearson lemma [37], which states that the likelihood ratio test is the most powerful test of size α. Specifically we define δ as

    δ(Y) = 1  if λ = p₁(Y)/p₀(Y) > η,
    δ(Y) = q  if λ = p₁(Y)/p₀(Y) = η,
    δ(Y) = 0  if λ = p₁(Y)/p₀(Y) < η.    (3.5)

Note that p₁ and p₀ are the probability density functions of the data under each hypothesis. We determine η and q such that P_F = ∫_{Γ₁} p₀(y) dy = α. The quantity λ = p₁(Y)/p₀(Y) is known as the likelihood ratio; it is often referred to as the test statistic as well. It is well known that λ is a sufficient statistic for Y [37].

In words, this test amounts to calculating the likelihood ratio λ and comparing it to some threshold η. If λ exceeds η, then we decide that a target was indeed present in our scene. If λ < η, then we decide the null hypothesis was true. The case when λ exactly equals the threshold is usually assigned to one of the hypotheses. As we said, η may be calculated such that the probability of false alarm is exactly equal to our predetermined acceptable false-alarm rate α. One may also determine η experimentally by performing the likelihood ratio test for several possible η values and then choosing whichever value gives the best probability of detection when the false-alarm rate is α or less. For those familiar with estimation and detection, this process is usually done in conjunction with the formation of the receiver operating characteristic (ROC) curve [37].

We note here that if Y is a discrete-time random process it is a simple task to define and express p₁ and p₀ explicitly. The discrete case corresponds to our data depending on a discrete set of slow-times and fast-times. However, if the measurements depend on continuous-time variables, writing an explicit expression for the probability density functions is not possible. We are then required to use the Radon-Nikodym derivative of the probability measure P₁ with respect to the measure P₀ in order to calculate λ. As seen in the previous chapter, we assume that our data depend on some interval of the real line for both slow-time and fast-time when deriving the backprojected image. We therefore need to express λ in the continuous case. This issue is discussed directly in the next section.

We now consider the second task, that of estimating unknown parameters describing Y. In particular we are interested in estimating the location of a possible target. In this problem we again need to define probability distributions to describe the data, but now the distributions vary depending on the unknown parameter, which we denote θ. We assume that θ lies in some set or space of possible values called the parameter space, which we denote Λ.
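The Neyman-Pearson recipe (3.4)-(3.5) can be made concrete with a toy problem. The sketch below is illustrative, not from the thesis: for a shift in the mean of n i.i.d. Gaussian samples, the likelihood ratio is monotone in the sample mean, so thresholding λ is equivalent to thresholding the sample mean; we set η for size α = 0.05 and estimate the resulting power by Monte Carlo.

```python
import numpy as np
from math import sqrt, erf

def Phi(z):
    """Standard normal CDF."""
    return 0.5*(1.0 + erf(z/sqrt(2.0)))

rng = np.random.default_rng(0)
n, mu, alpha = 25, 0.5, 0.05        # samples per observation, H1 mean shift, test size

# P_F = P(mean(Y) > eta | H0) = alpha, so eta = Phi^{-1}(1 - alpha)/sqrt(n).
# Invert Phi by bisection to stay dependency-free.
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if Phi(mid) < 1.0 - alpha else (lo, mid)
eta = lo/sqrt(n)

trials = 200000
pf = np.mean(rng.normal(0.0, 1.0, (trials, n)).mean(axis=1) > eta)   # empirical size
pd = np.mean(rng.normal(mu, 1.0, (trials, n)).mean(axis=1) > eta)    # empirical power
print(round(pf, 3), round(pd, 3))   # pf is near 0.05; no test of this size has larger pd
```

The trade-off mentioned above is visible here: lowering α raises η and reduces pd, and sweeping η while recording (pf, pd) pairs traces out the ROC curve.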
We will focus specifically on the maximum-likelihood estimation technique, which aims to find the value of θ that makes our observation Y most likely. Note we now write the probability density function associated with Y with the notation p_θ(Y). We then choose our estimate of θ in the following manner:

    θ̂_ML = arg max_{θ∈Λ} p_θ(Y),    (3.6)

where θ̂_ML is the maximum-likelihood estimate of θ. Note this is equivalent to maximizing the function log(p_θ(Y)); the quantity log(p) is simpler to compute than p when p has an exponential form, for example in the case of a Gaussian random process.

3.1.2 The Generalized Likelihood Ratio Test

We now describe the joint detection and estimation procedure known as the generalized likelihood ratio test [37, 39]. In general we now have the hypothesis testing problem

    H₀ : Y ∼ p_{θ₀}
    H₁ : Y ∼ p_{θ₁},    (3.7)

where θ₀ and θ₁ are two unknown parameters and p_{θ₀} and p_{θ₁} are the probability density functions of the data under the null and alternative hypotheses, respectively. The goal is to decide which hypothesis is true and also to estimate the corresponding unknown parameter θ₀ or θ₁, depending on which hypothesis is true. In the radar case we let θ₀ = 0, since the target is not present under H₀ and there is no parameter to estimate in that case. Therefore our hypothesis testing problem has the form

    H₀ : Y ∼ p
    H₁ : Y ∼ p_θ.    (3.8)

Comparing this to equation (3.3), we see that n ∼ p and that θ is related to the target data d. We already know what form the expression for d has, so the true unknown will end up being the target reflectivity itself, T. Later we will assume a specific type of scatterer to further simplify the unknown we wish to estimate. We first calculate the likelihood ratio for our data and then we maximize the resulting test statistic over all possible θ ∈ Λ in order to obtain our estimate θ̂_ML.
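As a concrete illustration of this two-step procedure (not from the thesis), consider detecting a known signal shape s of unknown amplitude A in white Gaussian noise. The amplitude plays the role of θ: its MLE is Â = ⟨Y,s⟩/‖s‖², and substituting Â back into the likelihood ratio gives 2 log λ_θ̂ = ⟨Y,s⟩²/(σ²‖s‖²), which is then compared to a threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma, A_true, trials = 100, 1.0, 0.4, 2000
s = np.cos(2*np.pi*0.1*np.arange(N))     # known signal shape; the amplitude A is unknown

def glrt_stat(y):
    """2*log(generalized likelihood ratio) for H1: y = A*s + n versus H0: y = n,
    obtained by plugging the MLE A_hat = <y,s>/||s||^2 back into the ratio."""
    return (y @ s)**2 / (sigma**2 * (s @ s))

stats0 = [glrt_stat(rng.normal(0.0, sigma, N)) for _ in range(trials)]            # H0
stats1 = [glrt_stat(A_true*s + rng.normal(0.0, sigma, N)) for _ in range(trials)] # H1

# Under H0 the statistic is chi-squared with 1 degree of freedom (mean 1); under
# H1 it is noncentral chi-squared with noncentrality A^2 ||s||^2 / sigma^2 = 8.
print(round(float(np.mean(stats0)), 1), round(float(np.mean(stats1)), 1))
```

The clear separation between the two empirical means is what makes thresholding the generalized likelihood ratio an effective detector even though A was never known.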

Mathematically, we calculate

    λ_θ = p_θ / p,    (3.9)

    θ̂_ML = arg max_{θ∈Λ} λ_θ.    (3.10)

We then make the choice "H₀ is true" or "H₁ is true" using the statistic λ_{θ̂_ML} in the test (3.5). Note this hypothesis testing problem is known as a detection problem in the presence of unknowns.

It is important to observe that the GLRT is not optimal in the way the likelihood ratio test is optimal in the standard detection case. We define an optimal test as one that is uniformly most powerful. It is well known that this ideal test exists when λ(Y) is monotonically increasing or decreasing [37]. In practice our test statistic is not usually monotonic, so it is very challenging to find this optimal test, and it is often necessary to rely on a suboptimal approach. We choose to utilize the GLRT because if a uniformly most powerful test does exist for our hypothesis testing problem, it coincides with the GLRT. Also, we note that the GLRT is asymptotically uniformly most powerful as the number of observations approaches infinity.

3.2 Continuous GLRT

As noted previously, we typically restrict ourselves to problems in discrete time in order to be able to explicitly express the probability density functions of the data under each hypothesis, and hence express the test statistic λ. There are some applications, however, in which the observations are best modeled as a continuous-time random process. In particular, recall from the previous chapter that we defined our data in terms of continuous fast-time and slow-time parameters. Therefore extending the standard GLRT to continuous-time random processes is necessary in order for us to compare the GLRT with the backprojection method.

We define a continuous-time random process Y as a collection of random variables {Y(t), t ∈ [0, T]} indexed by a continuous parameter t. We have chosen the observation interval, or index set, S = [0, T] for simplicity. We describe the extension to measurements dependent on two time parameters in the following section. We begin by describing the standard treatment of continuous-time random processes in detection and estimation schemes [37].

3.2.1 General Continuous-Time Random Processes

In the continuous-time case the observation space Γ becomes a function space. We therefore must consider the hypothesis testing problem

    H₀ : Y ∼ P
    H₁ : Y ∼ P_θ,    (3.11)

where P and P_θ are probability measures, as opposed to the probability density functions of the discrete-time case. In order to perform the detection and estimation task as before, we must define these families of densities on function spaces. We note that a density is a function that can be integrated in order to calculate probabilities. Thus it is necessary to choose a method of integration on function spaces; we focus on the Lebesgue-Stieltjes integral. This integral is a type of Lebesgue integral with respect to a Lebesgue-Stieltjes measure. An obvious example of such a measure is a probability measure. We look at this example in order to better understand this type of integral, since integration with respect to probability measures is familiar.

We assume that we have a probability measure μ on the observation space (Γ, G). We let X be a measurable function from (Γ, G) to (R, B), where B denotes the Borel sets of R (i.e. the smallest σ-field containing all intervals (a, b], a, b ∈ R). We may then define the following probability distribution P_X on (R, B):

    P_X(A) = μ(X⁻¹(A)),  A ∈ B,    (3.12)

where X⁻¹(A) = {y ∈ Γ : X(y) ∈ A}. Clearly X is a random variable, and we can demonstrate a Lebesgue-Stieltjes integral by taking the expectation of X. Taking the expectation is often thought of as averaging X weighted by μ. We have

    E[X] = ∫_Γ X(y) μ(dy) = ∫ X dμ.    (3.13)

This is a Lebesgue-Stieltjes integral. We are now able to define the idea of a probability density, which is needed to calculate the likelihood ratio. We first state the following definition.

Definition (Absolute continuity of measures) [7]. Suppose that μ₀ and μ₁ are two measures on (Γ, G). We say that μ₁ is absolutely continuous with respect to μ₀ (or that μ₀ dominates μ₁) if the condition μ₀(F) = 0 implies that μ₁(F) = 0. We use the notation μ₁ ≪ μ₀ to denote this condition.

We also now quote the Radon-Nikodym theorem, which will be key in defining the probability densities.

The Radon-Nikodym Theorem [7]. Suppose that μ₀ and μ₁ are σ-finite measures on (Γ, G) and μ₁ ≪ μ₀. Then there exists a measurable function f : Γ → R such that

    μ₁(F) = ∫_F f dμ₀    (3.14)

for all F ∈ G. Moreover, f is uniquely defined except possibly on a set G₀ with μ₀(G₀) = 0.

We call the function f the Radon-Nikodym derivative of μ₁ with respect to μ₀ and write it as f = dμ₁/dμ₀. Now recall we introduced the Radon-Nikodym derivative in order to express explicitly the probability densities p and p_θ. Using the Radon-Nikodym theorem, we see that if there exists a σ-finite measure μ on (Γ, G) such that P ≪ μ and P_θ ≪ μ for all θ ∈ Λ, we may express the densities as p = dP/dμ and p_θ = dP_θ/dμ, θ ∈ Λ. Note that P and P_θ are the probability measures associated with p and p_θ, respectively. We observe that we can always define a measure μ such that μ = P + P_θ.
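On a finite observation space these objects are concrete. The toy sketch below (not from the thesis) takes two measures on Γ = {0, ..., 5}, dominates both by their sum, forms Radon-Nikodym derivatives as pointwise ratios, and checks the defining property (3.14) on a subset F.

```python
import numpy as np

# two probability measures on the finite observation space Gamma = {0, ..., 5}
P       = np.array([0.20, 0.10, 0.30, 0.10, 0.20, 0.10])   # plays the role of P
P_theta = np.array([0.10, 0.20, 0.20, 0.30, 0.10, 0.10])   # plays the role of P_theta

# mu = P + P_theta dominates both, so p = dP/dmu and p_theta = dP_theta/dmu exist
mu = P + P_theta
p, p_theta = P/mu, P_theta/mu

# since P(x) > 0 everywhere here, P_theta << P and dP_theta/dP = p_theta/p
lam = p_theta/p

# check the defining property (3.14): P_theta(F) equals the integral of lam dP over F
F = np.array([1, 3, 4])
print(P_theta[F].sum(), (lam[F]*P[F]).sum())   # the two values agree
```

The ratio lam is exactly the likelihood ratio of the detection problem, which previews the identity (3.15) below.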

Therefore there always exists a μ on (Γ, G) such that P ≪ μ and P_θ ≪ μ, and we can define our densities with respect to μ as suggested, i.e. p = dP/dμ and p_θ = dP_θ/dμ, θ ∈ Λ. To perform the GLRT we still need to write explicitly the likelihood ratio p_θ/p. In order to compute this quantity we require the additional condition P_θ ≪ P, so that by the Radon-Nikodym theorem the Radon-Nikodym derivative of P_θ with respect to P exists. It may be shown [37] that for any μ dominating both P_θ and P we have

    dP_θ/dP = (dP_θ/dμ)/(dP/dμ) = p_θ/p.    (3.15)

We conclude that the likelihood ratio is simply the Radon-Nikodym derivative of P_θ with respect to P when P_θ ≪ P.

We now briefly remark on the case when P_θ is not absolutely continuous with respect to P. In this case there exists a set F such that P(F) = 0 and P_θ(F) > 0. The extreme case, in which P(F) = 0 and P_θ(F) = 1, is known as orthogonality of P and P_θ. This case is important in light of detection theory because we then say that the hypothesis testing problem (3.7) is perfectly detectable. We therefore have the following steps to consider when given a hypothesis testing problem as in (3.7). First, determine if the two measures are orthogonal; in that case we are finished with the detection task and no processing of the data is necessary. If this is not the case, we then must determine whether P_θ ≪ P. If so, we can calculate the Radon-Nikodym derivative as the likelihood ratio and perform the steps in the GLRT.

3.3 Reproducing-Kernel-Hilbert-Space Representations of Continuous-Time Random Processes

We now move on to discuss Parzen's reproducing kernel Hilbert space treatment of the estimation and detection tasks for continuous-time random processes [8]. The hope is to find a condition which guarantees that the measures P and P_θ are absolutely continuous with respect to each other. If such a condition is found, one is able to guarantee the existence of the Radon-Nikodym derivative of P_θ with respect to P and in turn write out an explicit expression for the likelihood ratio. His technique begins by approximating a continuous-time process with a discrete-time process. He then develops the condition under which the limit of the discrete-time likelihood ratio exists and is equivalent to the continuous-time likelihood ratio. Parzen is then able to give explicit formulas for the likelihood ratio in the case of Gaussian random processes. It turns out that the main condition that must be satisfied is that the signal we are trying to detect must lie in the reproducing kernel Hilbert space whose reproducing kernel is the covariance kernel of the noise process. We outline the main details of his work below.

In order to calculate the likelihood ratio in the continuous case, one begins by approximating with the discrete, finite-dimensional case. We consider the case when the index set is a finite subset of S; that is, we let

    S′ = (t₁, ..., t_n) ⊂ S.    (3.16)

We then define the hypothesis testing problem on this discrete subset S′ as

    H₀ : Y ∼ P_{S′},    (3.17)
    H₁ : Y ∼ P_{θ,S′},    (3.18)

where P_{S′} and P_{θ,S′} denote the probability measures associated with [Y(t), t ∈ S′] under H₀ and H₁, respectively. In addition, we assume that P_{θ,S′} ≪ P_{S′}, and therefore the Radon-Nikodym derivative, or likelihood ratio, exists on this subset and is given by

    λ_{S′} = dP_{θ,S′}/dP_{S′}.    (3.19)

The assumption that P_{θ,S′} ≪ P_{S′} is not strong, as we are typically able to write out the probability density functions, and hence the likelihood ratio, whenever S′ is discrete. Parzen then shows that if the quantity known as the divergence, given by

    J_S = lim_{S′↑S} J_{S′} = lim_{S′↑S} [ E_{H₁}(log λ_{S′}) − E_{H₀}(log λ_{S′}) ],    (3.20)

is finite, then the probability measures defined on the full continuous index set S satisfy P_θ ≪ P. Therefore the Radon-Nikodym derivative of P_θ with respect to P exists. In the case that the divergence is finite, Parzen also showed that we may calculate the Radon-Nikodym derivative, or likelihood ratio, λ via

    λ = dP_θ/dP = lim_{S′↑S} λ_{S′},    (3.21)

as the limit above exists in this case.

Now let us consider the case when the data has a Gaussian distribution. In particular we consider the hypothesis testing problem

    H₀ : m(t) = n(t),    (3.22)
    H₁ : m(t) = M(t) + n(t),    (3.23)

where m is the measured data, n is additive noise, and M is the signal we wish to detect. In the radar case M would correspond to data received from a target. Note that we assume n is Gaussian with zero mean and covariance kernel given by

    K(t, t′) = E[n(t) n(t′)].    (3.24)

Therefore m under H₁ is also a Gaussian random process with covariance kernel K, but with mean value M in this case. We begin by assuming that the time variable t resides in a continuous index set S = (0, ∞). Next it is assumed [8] that we can approximate S with a discrete subset S′. It is well known that the likelihood ratio for a discrete-time Gaussian noise process can be written as

    log λ_{S′} = (m, M)_{K,S′} − ½ (M, M)_{K,S′},    (3.25)

where S′ is the discrete index set described above and the inner product (·,·)_{K,S′} is given by

    (f, g)_{K,S′} = Σ_{t,t′∈S′} f(t) K⁻¹(t, t′) g(t′)    (3.26)

for any two functions f, g defined on S′ [8, 37]. In this case Parzen has shown that the divergence is finite, J_S < ∞, if and only if lim_{S′↑S} (M, M)_{K,S′} < ∞. Therefore we have that P_θ ≪ P, and consequently we can calculate λ, if (M, M)_{K,S′} approaches a limit as S′ ↑ S. Parzen has also shown that the functions M having this property are exactly the elements of the reproducing kernel Hilbert space with reproducing kernel K, denoted H(K).

We now discuss briefly the reproducing kernel Hilbert space H(K) and how Parzen uses this theory to calculate the likelihood ratio. First note that if K is the covariance kernel of a random process {m(t), t ∈ S}, it may be shown that there exists a unique Hilbert space, denoted H(K), which satisfies the following definition.

Definition (Reproducing kernel Hilbert space) [8]. A Hilbert space H is said to be a reproducing kernel Hilbert space, with reproducing kernel K, if the members of H(K) are functions on some set S, and if there is a kernel K on S × S having the two properties:

    (i) K(·, s) ∈ H(K) for every s ∈ S, where K(·, s) is the function defined on S with value at t ∈ S equal to K(t, s);
    (ii) (g, K(·, s))_K = g(s) for every g ∈ H(K),

where (·,·)_K denotes the inner product on H(K).

Again we mention that Parzen found that

    lim_{S′↑S} (M, M)_{K,S′} < ∞  if and only if  M ∈ H(K),    (3.27)

and also that in this case

    lim_{S′↑S} (M, M)_{K,S′} = (M, M)_K.    (3.28)

That is, we may calculate the inner product between two elements of H(K) using the limit (3.28). Using this fact, we may use the following result to calculate λ.

Theorem (Radon-Nikodym derivative) [8]. Let P_θ be the probability measure induced on the space of sample functions of a time series {m(t), t ∈ S} with covariance kernel K and mean-value function M. Assume that either (i) S is countable, or (ii) S is a separable metric space, K is continuous, and the stochastic process {m(t), t ∈ S} is separable. Let P be the probability measure corresponding to the Gaussian process with covariance kernel K and zero mean. Then P_θ and P are absolutely continuous with respect to one another, or orthogonal, depending on whether M does or does not belong to H(K). If M ∈ H(K), then the Radon-Nikodym derivative of P_θ with respect to P is given by

    λ[m(·)] = exp{ (m, M)_K − ½ (M, M)_K },    (3.29)

where (·,·)_K denotes the inner product on H(K).

We remark that, in words, a function is an element of the reproducing kernel Hilbert space if it is as smooth as the noise. If this is the case, we have P_θ ≪ P and we can calculate λ using the above theorem. If this is not the case, then P_θ and P are orthogonal and M is perfectly detectable.

We quote another of Parzen's results, which will be key in our calculation of the likelihood ratio in the following section. This next set of results outlines how one describes the elements of H(K), which we use to determine whether M ∈ H(K) and hence whether we can calculate λ.

Integral Representation Theorem [8]. Let K be a covariance kernel. Suppose a measure space (Q, B, μ) exists such that, in the Hilbert space of all B-measurable functions on Q satisfying

    (f, f)_μ = ∫_Q |f|² dμ < ∞,    (3.30)

there exists a family [f(t), t ∈ S] of functions satisfying

    K(t, t′) = (f(t), f(t′))_μ = ∫_Q f(t) \overline{f(t′)} dμ.    (3.31)

Then the reproducing kernel Hilbert space H(K) consists of all functions g on S which may be represented as

    g(t) = ∫_Q g* f(t) dμ    (3.32)

for some unique function g* in the Hilbert subspace L[f(t), t ∈ S] of L²(Q, B, μ) spanned by the family of functions [f(t), t ∈ S]. Note the superscript * is simply notation and the bar (or overline) is used to indicate complex conjugation. The norm of g is given by

    ‖g‖²_K = (g, g)_K = (g*, g*)_μ.    (3.33)

If [f(t), t ∈ S] spans L²(Q, B, μ), then m(t) may be represented as a stochastic integral with respect to an orthogonal random set function [Z(B), B ∈ B] with covariance kernel μ:

    m(t) = ∫_Q f(t) dZ,    (3.34)
    E[Z(B₁) Z(B₂)] = μ(B₁ ∩ B₂).    (3.35)

Further,

    (g, m)_{K,S} = ∫_Q g* dZ.    (3.36)

We also quote the integral representation theorem specifically for the case when the random process m is wide-sense stationary.
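Before specializing to stationary processes, the discrete-index formulas can be checked numerically. The sketch below (illustrative only) uses a hypothetical Ornstein-Uhlenbeck covariance kernel on a finite index set, where (f, g)_{K,S′} = fᵀK⁻¹g. It verifies the reproducing property (g, K(·,s))_K = g(s) and then checks by Monte Carlo that E_{H₁}[log λ_{S′}] − E_{H₀}[log λ_{S′}] equals (M, M)_{K,S′}, i.e. that the divergence equals the squared RKHS norm of the signal.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 20)                    # finite index set S'
K = np.exp(-np.abs(t[:, None] - t[None, :]))     # Ornstein-Uhlenbeck covariance kernel
Kinv = np.linalg.inv(K)
M = np.sin(2*np.pi*t)                            # signal to detect

def inner_K(f, g):
    """(f, g)_{K,S'} = sum over t, t' of f(t) K^{-1}(t, t') g(t')."""
    return f @ Kinv @ g

# reproducing property: (g, K(., s))_K = g(s); K(., s) is a column of K (s = index 7 here)
assert abs(inner_K(M, K[:, 7]) - M[7]) < 1e-8

def log_lr(m):
    """log lambda_{S'} = (m, M)_K - (1/2)(M, M)_K, as in (3.25)."""
    return inner_K(m, M) - 0.5*inner_K(M, M)

L = np.linalg.cholesky(K)                        # draws Gaussian noise with covariance K
trials = 20000
noise = L @ rng.normal(size=(len(t), trials))
ll0 = np.array([log_lr(noise[:, i]) for i in range(trials)])       # under H0
ll1 = np.array([log_lr(M + noise[:, i]) for i in range(trials)])   # under H1

divergence = np.mean(ll1) - np.mean(ll0)
print(round(float(divergence), 1), round(float(inner_K(M, M)), 1)) # the two agree
```

Because E_{H₀}[(n, M)_K] = 0 and E_{H₁}[(M + n, M)_K] = (M, M)_K, the two empirical means of the log-likelihood ratio are ∓½(M, M)_K, and their difference recovers the divergence.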

Integral Representation Theorem for Stationary Processes [8]. Let S = {t : −∞ < t < ∞} and let [m(t), t ∈ S] be a stationary time series with spectral density function f(ω), so that

    K(t, t′) = ∫ e^{iω(t−t′)} f(ω) dω.

Then H(K) consists of all functions g on S of the form

    g(t) = ∫ G(ω) e^{iωt} dω,  where G(ω) = g*(ω) f(ω)

(for some unique function g*(ω)), for which the norm

    ‖g‖²_K = ∫ |G(ω)|² / f(ω) dω

is finite. The corresponding random variable (m, g)_K can be expressed in terms of the spectral representation of m: if

    m(t) = ∫ e^{iωt} dZ(ω),  then  (m, g)_K = ∫ ( \overline{G(ω)} / f(ω) ) dZ(ω).

Parzen also extends the integral representation theorem to the case of a discrete set of stationary processes. We may think of this as analogous to the case of a discrete slow-time parameter. That is, we have the random process [m_s(t), −∞ < t < ∞, s = s₁, ..., s_n], where s_i ∈ R for all i = 1, ..., n. In this case we express the covariance kernel of the noise process as

    K_{s,s′}(t, t′) = E[X_s(t) X_{s′}(t′)] = ∫ e^{iω(t−t′)} f_{s,s′}(ω) dω,    (3.37)

where f_{s,s′}(ω) is the spectral density function. We now extend the integral representation theorem for stationary processes to define the elements of H(K). Any g_s(t) ∈ H(K), defined for s ∈ [s₁, ..., s_n] and t ∈ R, is given by

    g_s(t) = ∫ G_s(ω) e^{itω} dω,    (3.38)

where

    G_s(ω) = (1/2π) ∫ e^{−itω} g_s(t) dt    (3.39)

is the Fourier transform of g_s and is written as G_s(ω) = Σ_{s′} g*_{s′}(ω) f_{s,s′}(ω); note g*(ω) is unique. The norm on H(K) is written as

    ‖g‖²_K = ∫ [ Σ_{s,s′=s₁}^{s_n} \overline{G_s(ω)} f^{s,s′}(ω) G_{s′}(ω) ] dω < ∞,    (3.40)

where f^{s,s′}(ω) denotes the (s, s′) entry of the inverse of the matrix [f_{s,s′}(ω)]. We are now left to write the inner-product expressions (M, M)_K and (m, M)_K. Clearly we may use (3.40) to express (M, M)_K. For the second term of the likelihood ratio we utilize the integral representation theorem again; that is,

    (m, M)_K = Σ_{s,s′=s₁}^{s_n} ∫ \overline{G_s(ω)} f^{s,s′}(ω) dZ_{s′}(ω),    (3.41)

where G_s now denotes the Fourier transform of M_s and

    m_s(t) = ∫ e^{itω} dZ_s(ω).    (3.42)

3.4 The Relationship between the GLRT and Backprojection in SAR Imaging

We now outline our specific hypothesis testing problem and discuss how to calculate the likelihood ratio. For simplicity, we assume that the object in our scene is a point scatterer with scattering strength C: T(x) = Cδ(x − y), where y is the unknown location of the object in our scene. Note that we did not previously address the maximum-likelihood estimation step of the GLRT, as the only unknown we defined was the signal itself, M. Now we define a specific form for the signal d_y which depends on an unknown parameter y. This parameter y is what we estimate in the maximum-likelihood step of the GLRT. In this case, (2.36) has the form

    d_y(s,t) = F[T](s,t) = C ∫ e^{−iω(t − φ(s,y))} A(y, s, ω) dω.    (3.43)

We note that A depends on y only through the geometrical spreading factors, which are slowly varying. Consequently we neglect the dependence on y and write

    A(y, s, ω) ≈ Ã(s, ω).    (3.44)

Thus we write

    d_y(s,t) = F[T](s,t) = C ∫ e^{−iω(t − φ(s,y))} Ã(s, ω) dω.    (3.45)

In order to determine whether the target is present, and if it is present determine its location, we consider the hypothesis testing problem

    H₀ : m(s,t) = n(s,t)
    H_y : m(s,t) = d_y(s,t) + n(s,t),    (3.46)

where m(s,t) is our measured data and n(s,t) is additive white Gaussian noise. We also assume that our additive white Gaussian noise is stationary in fast-time and slow-time with spectral density

    S(ω; s₁, s₂) = σ² δ(s₁ − s₂).    (3.47)

Taking the Fourier transform gives us the covariance kernel

    K(t₁, t₂; s₁, s₂) = E[n(s₁, t₁) n(s₂, t₂)] = σ² δ(s₁ − s₂) ∫ e^{iω(t₁ − t₂)} dω    (3.48)

for (s₁, t₁) and (s₂, t₂) ∈ S, where S = {(s,t) : s ∈ R, t ∈ R} is the index set on which our stochastic processes m and n are defined. In addition, note that σ is a constant.

45 34 For our GLRT task we wish to detect the presence of T and also estimate its location y, i.e. we wish to find the maximum likelihood estimate of y and calculate the likelihood ratio in order to determine if T is present or not. We express this as follows: λ(y) = p y(m) p(m) y ML = arg(max λ(y)) (3.49) y Λ where Λ is the set of ground locations. In order to decide if a target did exist at location y ML we would compare the statistic λ yml to a predetermined threshold η as in the test defined in (3.5). Recall in the previous section in order to calculate λ for our continuous-time data we must be able to calculate dp y /dp which exists if and only if P y P. We will use the Hilbert space techniques of Parzen s to form an expression for the likelihood ratio and the maximum likelihood estimate of y in terms of reproducing kernel inner products. We summarize our result in the following theorem. Theorem. Given the hypothesis testing problem (3.46) and the definitions (3.45) of the data d y and (3.48) of the noise covariance kernel K respectively we have that the likelihood ratio, or test statistic, for detecting d y is given by the following backprojection operator: λ(y) = KF[T ](y) = e iω(t φ(s,y)) Ã(s, ω)dωm(s, t)dsdt. (3.50) We also have that the maximum likelihood estimate of y is given by, y ML = arg max y Λ λ(y) = arg max y Λ e iω(t φ(s,y)) Ã(s, ω)dωm(s, t)dsdt. (3.51) Proof. We begin by describing the reproducing kernel Hilbert space generated by the covariance kernel of the noise process n. Recall that previously we only considered random processes dependent on a discrete slow-time parameter. Our process

depends on a continuous slow-time, and therefore we must generalize Parzen's results to our situation. For a continuous slow-time process, i.e. \{m(s,t) : s \in \mathbb{R},\ t \in \mathbb{R}\}, we simply replace the summations in the preceding expressions with integrals over the slow-time parameter. In this case the elements of H(K) are functions g defined for s \in \mathbb{R} and t \in \mathbb{R} of the form

g(s,t) = \int G(s,\omega)\, e^{-it\omega}\, d\omega (3.52)

where G(s,\omega) = \tilde{g}(s,\omega)\, f(\omega; s, s') and f(\omega; s, s') is again the spectral density of the noise process n. If we assume K has the form in equation (3.48), we have that

f(\omega; s, s') = \sigma^2. (3.53)

We now note that we may write the data in the form

d_y(s,t) = \int e^{-it\omega}\, D_y(s,\omega)\, d\omega, (3.54)

where

D_y(s,\omega) = C\, e^{i\omega\phi(s,y)}\, \tilde{A}(s,\omega). (3.55)

Therefore we see that d_y \in H(K), and we may use Parzen's result for the Radon-Nikodym derivative to compute \lambda(y). If we extend the expression for (m, d_y)_K to continuous slow-time we have

(m, d_y)_K = \int\!\!\int \frac{\overline{D_y(s,\omega)}}{f(\omega; s, s)}\, dZ(\omega; s)\, ds
= \int\!\!\int \frac{\overline{C e^{i\omega\phi(s,y)} \tilde{A}(s,\omega)}}{\sigma^2} \left( \int e^{it\omega} m(s,t)\, dt \right) d\omega\, ds
= \frac{\overline{C}}{\sigma^2} \int\!\!\int e^{-i\omega\phi(s,y)}\, \overline{\tilde{A}(s,\omega)}\, M(s,\omega)\, d\omega\, ds (3.56)

where M(s,\omega) is the fast-time Fourier transform of m(s,t); observe also that we have written this statement in terms of a single slow-time s, as our process is stationary in slow-time. We may use similar steps to evaluate the other term in the likelihood

ratio, (d_y, d_y)_K. We find that

(d_y, d_y)_K = \frac{1}{\sigma^2} \int\!\!\int |C \tilde{A}(s,\omega)|^2\, ds\, d\omega. (3.57)

Thus using Parzen's theorem we find that

\lambda(y) = \frac{\overline{C}}{\sigma^2} \int\!\!\int e^{-i\omega\phi(s,y)}\, \overline{\tilde{A}(s,\omega)}\, M(s,\omega)\, d\omega\, ds - \frac{1}{2\sigma^2} \int\!\!\int |C \tilde{A}(s,\omega)|^2\, ds\, d\omega. (3.58)

Note that the second term of (3.58) does not depend on the unknown parameter y and therefore does not provide any information for our estimation and detection task. We can therefore neglect the second term of (3.58), and, dropping the constant factor \overline{C}/\sigma^2 (which likewise does not affect the test), obtain the following expression for the test statistic at each possible location y:

\lambda(y) = \int\!\!\int e^{-i\omega\phi(s,y)}\, \overline{\tilde{A}(s,\omega)}\, M(s,\omega)\, ds\, d\omega. (3.59)

If we take the inverse Fourier transform of M(s,\omega), we then obtain the time-domain version of the test statistic:

\lambda(y) = \int\!\!\int \int e^{i\omega(t - \phi(s,y))}\, \overline{\tilde{A}(s,\omega)}\, d\omega\; m(s,t)\, ds\, dt. (3.60)

We see that (3.60) is a special case of (2.38), where in (3.60) we have used m rather than d to denote the collected data. In (3.60), the filter Q of (2.38) is the matched filter, namely the complex conjugate of the amplitude \tilde{A}. We have shown that, with this choice of filter, the FBP image is equivalent to the test statistic calculated at each possible location y. We observe that one may think of the value of the test statistic at each y as a value assigned to a pixel. All these pixel values can be plotted to obtain a corresponding test-statistic image, which, as we have shown, is equivalent to a filtered-backprojection image formed with a matched filter. The final step of the detection and estimation problem is to estimate the unknown y. We achieve this simply by maximizing the above expression over all

possible y \in \Lambda, i.e.

y_{ML} = \arg\max_{y\in\Lambda} \lambda(y) = \arg\max_{y\in\Lambda} \int\!\!\int \int e^{i\omega(t-\phi(s,y))}\, \overline{\tilde{A}(s,\omega)}\, d\omega\; m(s,t)\, ds\, dt. (3.61)

This completes the proof.

We remark that this result can easily be extended to the case when our additive noise is colored, by replacing \sigma^2 with a spectral density function f(\omega). In this case we obtain a result similar to the above, but instead the filter is a matched filter in conjunction with a whitening filter, as one would expect. It is also important to observe that this result holds only in the case when (d_y, d_y)_K does not depend on y, that is, when the filter energy \frac{1}{\sigma^2}\int\!\!\int |C\tilde{A}(s,\omega)|^2\, ds\, d\omega does not depend on the unknown parameter y. If this were not the case, which it often is not, one would not obtain simply a backprojection-image expression for the test statistic \lambda(y).

We conclude our study of the relationship between backprojection and the generalized likelihood ratio test with a discussion of the consistency property of the maximum likelihood estimate y_{ML}. Recall that in backprojection we can use the pseudolocal property of the image-fidelity operator to guarantee that the target appears at the correct location y. It is well known [37] that y_{ML} \to y in probability as the number of measurements approaches infinity. We observe that in reality our data depend on a finite number of measurements; that is, t and s do not really span the entire real line. Therefore our image-fidelity operator is only approximately a pseudodifferential operator. However, as the number of measurements approaches infinity, we obtain an exact pseudodifferential operator. Similarly, in the GLRT, as the intervals on which t and s take values approach the entire real line, our maximum likelihood estimate of y converges to the true location in probability. This is known as consistency of the estimate. We note that further work is needed to truly understand why such similar results arise from markedly different theories.
This suggests that the use of both analysis (microlocal analysis in particular) and statistics is necessary and may lead to new breakthroughs in imaging capabilities.
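As a purely numerical illustration of the equivalence just proved — the test statistic (3.60) is a matched-filter backprojection image, and y_ML is its brightest pixel — the following sketch simulates a monostatic geometry with a straight flight path, travel time φ(s, y) = 2|γ(s) − y|/c, and flat amplitude Ã ≡ 1. The geometry, grids, and noise level are hypothetical choices for this sketch, not the thesis's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

c = 1.0                                         # wave speed (normalized units, assumption)
s_grid = np.linspace(-1.0, 1.0, 16)             # slow-time antenna positions
t_grid = np.linspace(0.0, 8.0, 512)             # fast time
omega = 2 * np.pi * np.linspace(4.0, 8.0, 32)   # frequency band (illustrative)

# Straight flight path at height 2 above the scene plane.
gamma = np.stack([s_grid, np.full_like(s_grid, 2.0)], axis=1)

def phi(s_idx, y):
    """Round-trip travel time phi(s, y) = 2 |gamma(s) - y| / c (monostatic)."""
    return 2.0 * np.linalg.norm(gamma[s_idx] - y) / c

y_true = np.array([0.3, 0.0])

# Synthesize noisy data m(s, t) = d_y(s, t) + n(s, t), with A~ = 1.
m = np.zeros((len(s_grid), len(t_grid)), dtype=complex)
for i in range(len(s_grid)):
    m[i] = np.exp(-1j * np.outer(t_grid - phi(i, y_true), omega)).sum(axis=1)
m += 0.5 * (rng.standard_normal(m.shape) + 1j * rng.standard_normal(m.shape))

# Test-statistic image lambda(y): matched-filter backprojection over candidate pixels,
# i.e. correlate m(s, t) against conj of the predicted echo for each y.
y_candidates = [np.array([x, 0.0]) for x in np.linspace(-0.8, 0.8, 33)]
lam = []
for y in y_candidates:
    acc = 0.0 + 0.0j
    for i in range(len(s_grid)):
        kernel = np.exp(1j * np.outer(t_grid - phi(i, y), omega)).sum(axis=1)
        acc += kernel @ m[i]        # integrate over t; the loop integrates over s
    lam.append(abs(acc))

y_ML = y_candidates[int(np.argmax(lam))]
print(y_ML)
```

The brightest pixel of the test-statistic image lands at (or within one pixel of) the true location, which is exactly the maximum-likelihood estimate of the theorem.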

CHAPTER 4
Polarimetric synthetic-aperture inversion for extended targets in clutter

4.1 Introduction

In this second body of work we consider the task of developing an imaging algorithm specifically for extended targets (curve-like, edge-like). Our goal is to create a model that reflects the directional scattering of edges. We also choose to work with a polarimetric radar system so that all sets of polarimetric data are available for the reconstruction task. The specific polarimetric radar system we consider includes two dipole antennas mounted on an aircraft. Each antenna has a linear polarization orthogonal to the other. Both antennas are used for transmission and reception, and in this way one is able to collect four sets of data, one for each transmitter-receiver pair. This type of system is unique in that it includes the polarization state of the electromagnetic waves in the model of the wave propagation. This model is derived from a full vector solution of Maxwell's equations. This is different from standard SAR models of wave propagation, in which one assumes the scalar wave equation is sufficient. A key difference between standard SAR and polarimetric SAR is how one describes the scatterers present in the scene of interest. In standard scalar SAR a scatterer is described by a scalar scattering strength, or a reflectivity function. This assumption indicates that each complex scatterer is made up of point scatterers. In polarimetric SAR the scatterers are described by a scattering vector (a 4 × 1 vector in particular), and therefore the scattering strength depends on the polarization states of the antennas used for transmission and reception. This may be thought of as describing each complex scatterer as a collection of dipole elements [27]. In addition, when one uses the standard scalar wave equation model there is only one polarimetric channel of data available for use in the reconstruction.
A polarimetric system enables one to incorporate all polarimetric channels of data and therefore provides more information for the reconstruction scheme.

Most current work in polarimetric radar, or polarimetry, is not focused on imaging algorithms. It is assumed that one may obtain an image of each element of the scattering vector describing the object of interest from the corresponding data set using standard SAR imaging algorithms. That is, if one antenna's polarization state is denoted a and the second b, we may reconstruct the element of the scattering vector denoted S_{a,b} from the data set collected when antenna a is used for transmission and antenna b for reception. One goal in polarimetry is target detection, which focuses on applying detection and estimation schemes to polarimetric images [16, 17, 18, 19, 20, 22, 23, 24, 26]. There is also another body of work in polarimetry aimed at other applications such as geographical or meteorological imaging. This work focuses on estimating parameters that distinguish types of distributed scatterers such as foliage or rain droplets. It has been found by many researchers [18, 20, 21, 26, 41] that a single scattering vector does not describe these distributed scatterers adequately. Such scatterers, like foliage or vegetation, are prone to spatial and/or temporal variations, and therefore a correlation, or covariance, matrix is needed to describe them. In this work we will focus on man-made targets. In particular we investigate the optimal backprojection imaging operator for polarimetric SAR data. We assume that objects in the scene of interest are made up of the dipole elements described above. The actual scattering vector for any object is assumed to be a second-order random process. In addition, measurement noise is included in the data model as a second-order process. The object of interest is assumed to display directional, or anisotropic, scattering behavior, therefore modeling an edge or curve. This object may be thought of as an edge of any man-made object, for example a vehicle.
We make the assumption that any individual dipole element making up the curve is only visible when the radar look direction is perpendicular to the orientation of that dipole element. This assumption serves to incorporate directional scattering and also to make a distinction between scatterers that make up the object of interest and other scatterers present in the scene (i.e. clutter). The clutter scattering behavior is assumed to be isotropic. This directional scattering assumption is strong but it enables us to write an analytic inversion

scheme. We will discuss the validity of this assumption further in section 4.6. The imaging technique used is an extension of the algorithm in [15] to the vector case. A filtered-backprojection-type reconstruction method [1, 2] is used, where a minimum mean-square error (MSE) criterion is utilized for selecting the optimal filter. We found that it is optimal to use what is called a coupled filter, or a fully dense filter matrix. This differs from the standard polarimetric SAR imaging algorithms, which assume that a diagonal filter (i.e. one that reconstructs each element of the scattering vector from its corresponding data set) may be used. We begin with a short introduction to polarimetric radar and the polarization state of electromagnetic waves. We also briefly review the standard radar cross section (RCS) models used for extended targets in the radar literature. We then consider our specific dipole SAR forward model, which stems from a method-of-potentials solution to Maxwell's equations. This model will be rewritten in variables similar to those of the standard RCS model in order to make a formal comparison of the two models. We then describe the assumptions made in order to describe the directional scattering behavior of the target and clutter. These assumptions also serve to linearize the forward model so that we are able to write out an analytic inversion scheme. We will describe the imaging process in general and then discuss the optimal filters both in the case when target and clutter are statistically independent and in the case when the two processes are correlated. We will conclude with numerical simulations comparing our imaging scheme with the standard polarimetric channel-by-channel processing method. We will demonstrate that our method improves the mean-square error (as it was designed to be optimal in the MSE sense), and also the final image signal-to-clutter ratio.
We will also see examples where the coupled processing technique helps us to reconstruct the correct target orientation when the standard processing fails in this respect.

Polarimetric Concepts

When discussing the wave propagation for polarimetric SAR we begin again with Maxwell's equations. Recall

\nabla \times E(t,x) = -\frac{\partial B(t,x)}{\partial t} (4.1)
\nabla \times H(t,x) = J(t,x) + \frac{\partial D(t,x)}{\partial t} (4.2)
\nabla \cdot D(t,x) = \rho (4.3)
\nabla \cdot B(t,x) = 0 (4.4)

where E is the electric field, B is the magnetic induction field, D is the electric displacement field, H is the magnetic intensity or magnetic field, \rho is the charge density, and J is the current density. For simplicity we will again consider the case in which Maxwell's equations simplify to the wave equation for each element of the electric and magnetic fields. These assumptions are used solely to obtain a wave solution simple enough for us to visualize the polarization state. We will go back to the full Maxwell's equations to model the wave propagation in the following section. The simplest solution to the wave equation (for a linear, source-free, homogeneous medium) is known as a plane wave. These waves have constant amplitude in a plane perpendicular to the direction of propagation. We express the electric field of a plane wave as

E(r,t) = E(r)\cos(\omega t) (4.5)

where r \in \mathbb{R}^3 is the position vector, k \in \mathbb{R}^3 is the direction of propagation, \omega is the angular frequency, and t is time (specifically our fast-time). Note that in particular we call this type of wave a monochromatic plane wave because it varies in time with a single angular frequency. We may also write its representation in a form that is independent of t:

E(r) = E\, e^{i k \cdot r} (4.6)

where E is a constant-amplitude field vector. It is important to observe that we have defined a right-handed coordinate system, denoted (\hat{h}, \hat{v}, \hat{k}). We have that

E lies in the plane perpendicular to \hat{k} and therefore may be written as a linear combination of the basis vectors which define this plane, that is,

E = E_h \hat{h} + E_v \hat{v}. (4.7)

We may now discuss the polarization of waves. This quantity is used to describe the behavior of the field vector in time. If we look specifically in the plane perpendicular to the direction of propagation and let the field vector vary with time, it will trace out its polarization state. In general the vector traces out an ellipse, which is known as the polarization ellipse.

Figure 4.1: Linear, Circular, and Elliptical Polarization States

As shown in Figure 4.1, the shape of the ellipse varies with polarization state. We have pictured, from left to right respectively, the linear, circular, and elliptical polarization states. These states are defined by two angles, the orientation angle \psi and the ellipticity angle \chi.
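The two angles can be computed directly from the complex field components (E_h, E_v) of (4.7). The sketch below uses the standard Stokes-parameter relations tan(2ψ) = S_2/S_1 and sin(2χ) = S_3/S_0; the sign convention chosen for S_3 is an assumption, since conventions differ between texts.

```python
import math

def polarization_angles(E_h: complex, E_v: complex):
    """Orientation psi and ellipticity chi of the polarization ellipse,
    from the Stokes parameters of the field (E_h, E_v).
    Sign convention for S3 is an assumption (texts differ)."""
    S0 = abs(E_h) ** 2 + abs(E_v) ** 2
    S1 = abs(E_h) ** 2 - abs(E_v) ** 2
    S2 = 2.0 * (E_h * E_v.conjugate()).real
    S3 = -2.0 * (E_h * E_v.conjugate()).imag
    psi = 0.5 * math.atan2(S2, S1) % math.pi   # orientation, 0 <= psi < pi
    chi = 0.5 * math.asin(S3 / S0)             # ellipticity, -pi/4 <= chi <= pi/4
    return psi, chi

# Horizontal linear state: psi = 0, chi = 0.
print(polarization_angles(1.0 + 0j, 0j))
# Linear state slanted at 45 degrees: psi = pi/4, chi = 0.
print(polarization_angles(1.0 + 0j, 1.0 + 0j))
# Circular state (E_v a quarter-cycle out of phase with E_h): |chi| = pi/4.
print(polarization_angles(1.0 + 0j, 1j))
```

Linear states return χ = 0 and circular states return |χ| = π/4, matching the description of the two angles above.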

Figure 4.2: The Polarization Ellipse

The orientation angle describes the direction, or slant, of the ellipse and takes on values in the range 0 \le \psi \le \pi. The ellipticity angle takes on the values -\pi/4 \le \chi \le \pi/4 and characterizes the shape of the ellipse. For example, \chi = 0 describes the linear states; in particular we have the horizontal state described further by \psi = 0 and the vertical state, in which case \psi = \pi/2. For circular polarization we have \chi = \pm\pi/4. All other ellipticity angles describe the various elliptical states. We now discuss briefly the polarimetric scattering scenario. Again for simplicity we will assume that our transmitting antenna transmits a fully polarized monochromatic plane wave denoted E^i. This wave has propagation direction k^i and its field vector is written in terms of the basis vectors defining the plane perpendicular to k^i. That is,

E^i = E^i_h \hat{h}^i + E^i_v \hat{v}^i. (4.8)

As in standard SAR this wave will interact with the target, or scatterer, and the wave speed will change. In addition, the polarization state and/or degree of polarization may change due to this target interaction. We will focus on characterizing this change now, and will add in the wave-speed change, or reflectivity function, when we write our full forward model. Now we assume that the wave which scatters off the target is received at the antenna, which lies in direction k^s, in the far field of the object. We therefore may express the scattered field vector in terms of the basis vectors defining the plane

perpendicular to k^s. That is,

E^s = E^s_h \hat{h}^s + E^s_v \hat{v}^s. (4.9)

Note that the right-handed coordinate system (\hat{h}^s, \hat{v}^s, \hat{k}^s) is not typically the same as the coordinates (\hat{h}^i, \hat{v}^i, \hat{k}^i). This process, which takes E^i and returns E^s, is thought of as a transformation performed by the scatterer. We describe this transformation mathematically as

E^s = [S]\, E^i = \begin{pmatrix} S_{hh} & S_{hv} \\ S_{vh} & S_{vv} \end{pmatrix} E^i, (4.10)

where [S] is known as the scattering matrix for the scatterer present in the scene. This will be incorporated into the quantity we reconstruct later when we discuss polarimetric imaging. Note that for a given frequency and scattering geometry, [S] depends only on the scatterer; however, it does depend on the basis we use to describe the waves. We also remark that in polarimetric SAR one attempts to measure the scattering matrix by transmitting two orthogonal polarizations on a pulse-to-pulse basis and then receiving the scattered waves in the same two orthogonal polarizations. In our specific case we will perform this task with two orthogonal dipole antennas, denoted a and b.

4.2 Radar Cross Section for Extended Targets

In the preceding section we discussed how to model the polarization change when a field interacts with a target. Also, in chapter two we discussed modeling the change in wave speed with the scalar reflectivity function. We will now describe a third way of characterizing a target, known as the radar cross section. This quantity is the most common in the radar literature and is preferred by radar engineers. For this reason we will outline the accepted radar cross section for our extended, or curve-like, targets, and we will also compare our scattering model to it after our model is explained in section 4.3.
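A small sketch of the transformation (4.10): below, a thin dipole oriented at angle θ in the (ĥ, v̂) plane is given the standard rank-one scattering matrix S = ê(θ)ê(θ)ᵀ. The overall complex scale factor is suppressed; this is an illustrative model in the spirit of the dipole elements discussed earlier, not the thesis's full forward model.

```python
import numpy as np

def dipole_scattering_matrix(theta: float) -> np.ndarray:
    """Scattering matrix [S] of an ideal thin dipole oriented at angle theta
    in the (h, v) plane: S = e(theta) e(theta)^T, a standard rank-one model
    (overall complex scale factor omitted)."""
    e = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(e, e)

# Incident field on the (h, v) basis, E^i = E_h h^ + E_v v^.
E_i = np.array([1.0 + 0j, 0.0 + 0j])        # horizontally polarized

# E^s = [S] E^i, the transformation of equation (4.10).
S = dipole_scattering_matrix(np.pi / 4)     # dipole tilted 45 degrees
E_s = S @ E_i
print(E_s)   # the tilted dipole converts part of E_h into a cross-polarized E_v

# A horizontal dipole (theta = 0) returns nothing from a vertical incident field:
print(dipole_scattering_matrix(0.0) @ np.array([0.0 + 0j, 1.0 + 0j]))
```

This illustrates why [S] is generally dense: a scatterer whose orientation is not aligned with the antenna basis couples the two polarization channels.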

Radar Cross Section and Polarimetric Radar Cross Section

We begin by describing the radar equation, which may be used to express the interaction between the incident field, the target, and the receiving antenna. It is written in terms of the power the target absorbs, or intercepts, from the incident wave and then reradiates to the receiving antenna. Mathematically we express this relationship as

P_R = \frac{P_T\, G_T(\theta,\phi)}{4\pi r_T^2}\; \sigma\; \frac{A_{er}(\theta,\phi)}{4\pi r_R^2} (4.11)

where P_R is the power detected at the receiving antenna, P_T is the transmitted power, G_T is the transmitting antenna gain, A_{er} is the effective aperture of the receiving antenna, and r_T and r_R are the distances between the target and the transmitting and receiving antennas respectively. Also, we have the spherical angles \theta and \phi that describe the azimuth and elevation angles of observation. We note that one may arrive at the radar equation directly from our standard SAR forward model given in equation (2.25). The RCS is given by the quantity \sigma. It is defined as the cross section of an equivalent idealized isotropic scatterer that generates the same scattered power density as the target in the observed direction. We may express \sigma in the form

\sigma = 4\pi r^2\, \frac{|E^s|^2}{|E^i|^2}. (4.12)

Observe that \sigma depends on the frequency transmitted, the polarization state of the wave transmitted, the flight path or antenna placement, and also the target's geometry and dielectric properties. We are mainly concerned with its polarization dependence, so we will discuss that in more detail now. We define the polarization-dependent RCS as

\sigma_{qp} = 4\pi r^2\, \frac{|E^s_q|^2}{|E^i_p|^2} (4.13)

where p is the polarization state of the transmitted field and q is the polarization of the scattered field. We now recall the expression for the polarization scattering process (4.10). In the literature the relationship between the radar cross section and

the scattering matrix elements is defined as

\sigma_{qp} = 4\pi\, |S_{qp}|^2. (4.14)

Note that this expression neglects to include all terms of the scattered field. For example, if we use the standard (\hat{h}, \hat{v}) basis we have that E^s_h = S_{hh} E^i_h + S_{hv} E^i_v. However, for equation (4.14) to hold it must be assumed that E^s_h \approx S_{hh} E^i_h. In our scattering model we choose not to make this assumption. This will be the main difference between our model and the accepted model from the radar literature. This point will be discussed in more detail in the following section.

Method of Potentials Solution of Maxwell's Equations

We now go on to discuss how to arrive at the specific expressions for the electric fields E^i and E^s. We begin with Maxwell's equations in the frequency domain:

\nabla \times E(\omega,x) = i\omega B(\omega,x) (4.15)
\nabla \times H(\omega,x) = J(\omega,x) - i\omega D(\omega,x) (4.16)
\nabla \cdot D(\omega,x) = \rho(x) (4.17)
\nabla \cdot B(\omega,x) = 0. (4.18)

First note that since the magnetic induction field B has zero divergence, it may be expressed as the curl of another field A. We call A the vector potential and write

B = \nabla \times A. (4.19)

We insert (4.19) into (4.15) to arrive at

\nabla \times \left( E(\omega,x) - i\omega A(\omega,x) \right) = 0. (4.20)

Now we use another fact from vector calculus: a vector field whose curl is zero can be written as the gradient of a potential. Mathematically,

E - i\omega A = -\nabla\Phi (4.21)

where \Phi is called the scalar potential. We rewrite this to express the electric field as

E = -\nabla\Phi + i\omega A. (4.22)

We now assume that our medium is free space and therefore the free-space constitutive relations hold. That is, we have

D = \epsilon_0 E (4.23)
B = \mu_0 H. (4.24)

Using these, and also (4.22), in (4.16) and (4.17), we arrive at a system of equations for A and \Phi. We have

\nabla \times (\mu_0^{-1} \nabla \times A) = J - i\omega\epsilon_0 E = J - i\omega\epsilon_0 (-\nabla\Phi + i\omega A) (4.25)
\nabla \cdot (\epsilon_0 E) = \epsilon_0 \nabla \cdot (i\omega A - \nabla\Phi) = \rho. (4.26)

We pause here to discuss an issue with the definitions of A and \Phi. Observe that if one adds the gradient of any scalar field, say \psi, to A, the physical magnetic induction field will not change, because the curl of \nabla\psi is always zero. Also, if one adds the quantity i\omega\psi to \Phi, then E will remain unchanged. Therefore the transformation

A \to A + \nabla\psi, \qquad \Phi \to \Phi + i\omega\psi (4.27)

does not affect E and H. This is called a gauge transformation. In order to solve for our fields we must add an additional constraint to the system of equations for A and \Phi. We will use the constraint known as the Lorenz gauge, which states

\nabla \cdot A - i\omega\epsilon_0\mu_0\, \Phi = 0.

Now, returning to our system of equations for A and \Phi, we begin to solve by using the triple-product (or BAC-CAB) identity in (4.25). We have

\mu_0^{-1}\, \nabla \times (\nabla \times A) = \mu_0^{-1} \left[ \nabla(\nabla \cdot A) - \nabla^2 A \right] = J - i\omega\epsilon_0 (i\omega A - \nabla\Phi). (4.28)

Rearranging terms gives us

\nabla^2 A + \omega^2 \epsilon_0 \mu_0 A = -\mu_0 J + \nabla(\nabla \cdot A) - i\omega\epsilon_0\mu_0\, \nabla\Phi. (4.29)

Next we use the definition k^2 = \omega^2 \epsilon_0 \mu_0 and write

\nabla^2 A + k^2 A = -\mu_0 J + \nabla\left( \nabla \cdot A - i\omega\epsilon_0\mu_0 \Phi \right). (4.30)

Note that the expression in parentheses is exactly the Lorenz gauge constraint. Therefore this expression simplifies to the Helmholtz equation

\nabla^2 A + k^2 A = -\mu_0 J. (4.31)

Now we move on to solve for \Phi. To find the expression for \Phi we begin with equation (4.26), which we restate here:

\epsilon_0 \nabla \cdot (i\omega A - \nabla\Phi) = \rho. (4.32)

From the Lorenz gauge we have that \nabla \cdot A = i\omega\epsilon_0\mu_0 \Phi, which gives us

i\omega\epsilon_0 (i\omega\epsilon_0\mu_0 \Phi) - \epsilon_0 \nabla \cdot (\nabla\Phi) = \rho. (4.33)

This expression may be rewritten in terms of k again as

\nabla^2 \Phi + k^2 \Phi = -\rho/\epsilon_0. (4.34)

Therefore we see that solving Maxwell's equations and finding the electric and magnetic field expressions reduces to solving two uncoupled Helmholtz equations

in free space. From this point on we will focus specifically on cylindrical extended targets for simplicity. This type of extended target is dealt with extensively in the radar cross section literature, and we will ultimately compare our forward model with this accepted RCS model. It is common to solve the Helmholtz equation in cylindrical coordinates in order to find the expressions for our vector and scalar potentials when one considers a target that is cylindrical in shape. This solution of the Helmholtz equation will then be used to calculate E and H. Once we have the expression for the electric field, we may write down the radar cross section for our target.

Helmholtz Equation in Cylindrical Coordinates

As shown above, finding the expression for the electric (and hence magnetic) field amounts to solving the Helmholtz equation. We begin by considering the scalar Helmholtz equation in cylindrical coordinates in a source-free region. This equation is given by

\frac{1}{\rho} \frac{\partial}{\partial\rho}\left( \rho \frac{\partial\psi}{\partial\rho} \right) + \frac{1}{\rho^2} \frac{\partial^2\psi}{\partial\phi^2} + \frac{\partial^2\psi}{\partial z^2} + k^2 \psi = 0. (4.35)

We define \rho, \phi, and z in Figure 4.3, depicting standard cylindrical coordinates. To solve this partial differential equation we will use the method of separation of variables and hence look for a solution of the form

\psi = R(\rho)\,\Phi(\phi)\,Z(z). (4.36)

Without going through all the intermediary calculations, we arrive at the following set of separated equations for R, \Phi, and Z:

\rho \frac{d}{d\rho}\left( \rho \frac{dR}{d\rho} \right) + \left[ (k_\rho \rho)^2 - n^2 \right] R = 0 (4.37)
\frac{d^2\Phi}{d\phi^2} + n^2 \Phi = 0 (4.38)
\frac{d^2 Z}{dz^2} + k_z^2 Z = 0 (4.39)

where n is the separation constant and we have the relation k_\rho^2 + k_z^2 = k^2. We

Figure 4.3: Cylindrical Coordinates

first observe that equations (4.38) and (4.39) are simply harmonic equations, and therefore the solutions are given by \Phi = h(n\phi) and Z = h(k_z z), where h is any harmonic function. Now, equation (4.37) is a Bessel equation of order n. We will denote the solution of the equation by B_n(k_\rho \rho). It is well known that in general B_n(k_\rho \rho) is a linear combination of any two linearly independent Bessel functions. We have now that a solution of the Helmholtz equation in cylindrical coordinates is given by the elementary wave function \psi_{k_\rho,n,k_z}. These functions are written

\psi_{k_\rho,n,k_z} = B_n(k_\rho \rho)\, h(n\phi)\, h(k_z z). (4.40)

The general solutions are therefore linear combinations of the functions (4.40). The general solution is given as a sum over all possible values of n and k_z (or n and k_\rho), i.e.

\psi = \sum_n \sum_{k_z} C_{n,k_z}\, \psi_{k_\rho,n,k_z} = \sum_n \sum_{k_z} C_{n,k_z}\, B_n(k_\rho \rho)\, h(n\phi)\, h(k_z z), (4.41)

where the C_{n,k_z} are constants. We may also obtain general solutions which integrate

over all possible k_z (or k_\rho) when these quantities are continuous. We note that n is usually discrete, and therefore we will continue to sum over n values. In this case the general solution is given by

\psi = \sum_n \int f_n(k_z)\, B_n(k_\rho \rho)\, h(n\phi)\, h(k_z z)\, dk_z (4.42)

where the integration is over any contour in \mathbb{C} when k_z \in \mathbb{C}, or any interval in \mathbb{R} when k_z \in \mathbb{R}. The functions f_n(k_z) are analogous to the constants C_{n,k_z} and are obtained from the boundary conditions imposed in a specific propagation and/or scattering scenario. In order to fully define these solutions we must choose appropriate Bessel and harmonic functions. We will first choose the harmonic functions h(n\phi) = e^{in\phi} and h(k_z z) = e^{ik_z z}, as they are linear combinations of both sine and cosine. For the Bessel equation we may choose functions based on the behavior we expect to see at \rho = 0 or as \rho \to \infty. If we seek a solution that is non-singular at \rho = 0, then we are required to select Bessel functions of the first kind, denoted J_n(k_\rho \rho). If instead we seek a solution which decays as \rho \to \infty, i.e. outward-traveling waves, we will choose Hankel functions of the second kind, denoted H_n^{(2)}(k_\rho \rho). Now we move on to write expressions for E and H in terms of these elementary wave functions \psi. Using the method-of-potentials solution in cylindrical coordinates we may write the components of a field polarized along the z-axis, also known as TM to z (i.e. no H_z component), as

E_\rho = \frac{1}{i\omega\epsilon}\, \frac{\partial^2\psi}{\partial\rho\,\partial z} (4.43)
E_\phi = \frac{1}{i\omega\epsilon\,\rho}\, \frac{\partial^2\psi}{\partial\phi\,\partial z} (4.44)
E_z = \frac{1}{i\omega\epsilon} \left( \frac{\partial^2}{\partial z^2} + k^2 \right)\psi (4.45)
H_\rho = \frac{1}{\rho}\, \frac{\partial\psi}{\partial\phi} (4.46)
H_\phi = -\frac{\partial\psi}{\partial\rho} (4.47)
H_z = 0. (4.48)
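The radial factor in the elementary wave functions can be evaluated without a special-function library. The sketch below computes J_n from its standard integral representation and checks the classical three-term recurrence J_{n-1}(x) + J_{n+1}(x) = (2n/x) J_n(x); the quadrature resolution is an accuracy assumption valid for modest n and argument.

```python
import math

def bessel_j(n: int, x: float, m: int = 2000) -> float:
    """Bessel function of the first kind via the integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*tau - x*sin(tau)) d tau,
    approximated by the midpoint rule with m panels."""
    h = math.pi / m
    return sum(math.cos(n * (k + 0.5) * h - x * math.sin((k + 0.5) * h))
               for k in range(m)) * h / math.pi

def wave_function(n: int, k_rho: float, k_z: float,
                  rho: float, phi: float, z: float) -> complex:
    """Elementary cylindrical wave function psi = J_n(k_rho rho) e^{i n phi} e^{i k_z z},
    using the choice B_n = J_n that is non-singular at rho = 0."""
    phase = n * phi + k_z * z
    return bessel_j(n, k_rho * rho) * complex(math.cos(phase), math.sin(phase))

x = 2.5
# Three-term recurrence, a quick sanity check on the radial solution.
lhs = bessel_j(0, x) + bessel_j(2, x)
rhs = (2 * 1 / x) * bessel_j(1, x)
print(abs(lhs - rhs))   # small residual
```

The same `bessel_j` routine, with Hankel functions substituted for J_n, is what one would need to evaluate the outward-traveling solutions used in the scattering calculation below.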

One may express any field TM to z in terms of these solutions when in a source-free region. Similarly, we may express an orthogonally polarized field, known as TE to z (i.e. no E_z component), as

H_\rho = \frac{1}{i\omega\mu}\, \frac{\partial^2\psi}{\partial\rho\,\partial z} (4.49)
H_\phi = \frac{1}{i\omega\mu\,\rho}\, \frac{\partial^2\psi}{\partial\phi\,\partial z} (4.50)
H_z = \frac{1}{i\omega\mu} \left( \frac{\partial^2}{\partial z^2} + k^2 \right)\psi (4.51)
E_\rho = -\frac{1}{\rho}\, \frac{\partial\psi}{\partial\phi} (4.52)
E_\phi = \frac{\partial\psi}{\partial\rho} (4.53)
E_z = 0. (4.54)

Any TE field may be expressed in terms of these solutions. Also note that we may express any arbitrarily polarized field as a superposition of the TM and TE fields above.

Scattering in Two Dimensions

We now consider an example scattering problem in two dimensions. We begin with this simple scenario and build on these solutions to obtain the RCS of our extended target in three dimensions. We assume that we have an incident plane wave which scatters off an infinitely long cylinder lying along the z-axis. For example, we assume that the incident field is z-polarized (TM to z); therefore we have that

E^i_z = E_0\, e^{-ik\rho\cos\phi} = E_0 \sum_{n=-\infty}^{\infty} i^{-n} J_n(k\rho)\, e^{in\phi}. (4.55)

For details on how to obtain this expression for the incident field see [31]. The total field is the sum of the incident and scattered fields, which is expressed mathematically as

E_z = E^i_z + E^s_z. (4.56)

We assume that our solution is composed of outward-traveling waves, and therefore

Figure 4.4: Scattering scenario with an infinite-length cylinder lying along the z-axis; the incident field is normal to the cylinder [28]

we have that the scattered field must include the Hankel function of the second kind, as described above. Explicitly, we write

E^s_z = E_0 \sum_{n=-\infty}^{\infty} i^{-n} a_n H_n^{(2)}(k\rho)\, e^{in\phi}. (4.57)

This gives us the following expression for the total field:

E_z = E_0 \sum_{n=-\infty}^{\infty} i^{-n} \left( J_n(k\rho) + a_n H_n^{(2)}(k\rho) \right) e^{in\phi}. (4.58)

Now, in order to determine the coefficients a_n, we must impose a boundary condition. We will assume that for this cylinder the z-component of the electric field is zero on the surface of the scatterer. If we assume that the cylinder has radius a, we say that E_z = 0 at \rho = a. This boundary condition allows us to solve for the coefficients a_n, which are given by

a_n = -\frac{J_n(ka)}{H_n^{(2)}(ka)}. (4.59)

It is also of interest to consider an asymptotic, or approximate, solution for the scattered field in the far field of the cylinder. In this case one may utilize asymptotic formulas for H_n^{(2)} as k\rho \to \infty. We find in this case that E^s_z approaches the following

form:

E^s_z \approx E_0 \sqrt{\frac{2i}{\pi k\rho}}\, e^{-ik\rho} \sum_{n=-\infty}^{\infty} a_n e^{in\phi} (4.60)

where a_n is defined as in equation (4.59). We may also consider the case when the incident field has the orthogonal polarization, that is, transverse to z, or TE to z. In this case we write the incident field as

H^i_z = H_0\, e^{-ikx} = H_0 \sum_{n=-\infty}^{\infty} i^{-n} J_n(k\rho)\, e^{in\phi}. (4.61)

Repeating similar steps, we obtain the expression for the scattered field in the far field of the cylinder,

H^s_z \approx H_0 \sqrt{\frac{2i}{\pi k\rho}}\, e^{-ik\rho} \sum_{n=-\infty}^{\infty} b_n e^{in\phi} (4.62)

where

b_n = -\frac{J_n'(ka)}{H_n^{(2)\prime}(ka)}. (4.63)

For more details on the derivation for this incident field see [31].

RCS for Infinitely Long Cylinder

Note that so far we have not commented on the length of the object. Obviously, in our actual scattering scenario the object has finite length. However, we will first consider an object of infinite length, which simplifies the cross section calculations. We begin with the case when the incident field is normal to the cylinder and then extend to oblique incidence. Finally, we will consider the case when the cylinder has finite length.

Normal Incidence

For an infinitely long object one is required to use a variation of the radar cross section called the scattering cross section. It is defined as

\sigma^c = \lim_{\rho\to\infty} 2\pi\rho\, \frac{E^s \cdot \overline{E^s}}{E^i \cdot \overline{E^i}}. (4.64)

We consider an infinite cylinder because our target of interest will be significantly longer than it is wide. We first express the scattered fields obtained above in the case when the quantity ka \ll 1. This indicates that as the object length approaches infinity its width becomes infinitesimally small. In this case we use approximate values for the coefficients a_n and b_n and consider only the first few terms in the series. For the case when the incident field is polarized TM to z, the n = 0 term is dominant, and in that term we use the small-argument formula (as ka \ll 1) for H_0^{(2)}, leading to the following expression for the scattered field:

E^s_z \approx -E^i_z \sqrt{\frac{\pi}{2k\rho}}\; \frac{e^{-i(k\rho + \pi/4)}}{\log(2/\gamma ka) - i\pi/2} (4.65)

where \gamma = 1.781\ldots is Euler's constant and the logarithm in the denominator arises in the small-argument expansion of the Bessel function. In the case when the incident field is polarized TE to z, the n = 0, \pm 1 terms all contribute significantly, leading to the following expression for the scattered field:

H^s_z \approx H^i_z \sqrt{\frac{\pi}{2k\rho}}\; \frac{(ka)^2}{2}\, e^{-i(k\rho - 3\pi/4)} \left( 1 + 2\cos\phi \right). (4.66)

If we insert these expressions into the definition of the scattering cross section, we obtain the following bi-static scattering widths (again for ka \ll 1):

\sigma^c_{TM}(\phi) = \frac{\pi^2 a}{ka \left[ \log^2(2/\gamma ka) + \pi^2/4 \right]} (4.67)
\sigma^c_{TE}(\phi) = \pi^2 a\, (ka)^3 \left( 1/2 + \cos\phi \right)^2, (4.68)

where \phi is defined in the diagram in the previous section.

Oblique Incidence

In this case we assume the incident field has a propagation direction in the x-z plane and that the cylinder axis still lies along the z-axis. The angle \Psi is the angle between the incident wave's propagation direction and the x-y plane. We calculate the scattered field, and hence the scattering cross section, at a point, say P, which lies in a plane intersecting the z-axis and makes an angle \phi with the

x axis. The TM wave lies in the x–z plane and the TE wave lies orthogonal to the TM component.

Figure 4.5: Scattering scenario for an infinite-length cylinder when the incident field makes an angle Ψ with the x–y plane (oblique incidence) [28]

The z components of the E and H fields are given by

    E_z^{TM} = E_0^{TM} \cos\Psi\, e^{ik(z\sin\Psi - x\cos\Psi)}    (4.69)

    H_z^{TE} = H_0^{TE} \cos\Psi\, e^{ik(z\sin\Psi - x\cos\Psi)}    (4.70)

where H_0^{TE} = \sqrt{\epsilon/\mu}\, E_0^{TE}. We now simply quote the resulting scattering cross sections, again in the case when ka ≪ 1. We have

    \sigma^c_{TM}(\phi) = \frac{4\pi^2 a \cos\Psi \cos^2\Psi}{ka\cos\Psi\,[\log^2(2/(\gamma ka)) + \pi^2/4]}    (4.71)

    \sigma^c_{TE}(\phi) = 4\cos^2\Psi\, \pi^2 a \cos\Psi\, (ka\cos\Psi)^3 \left(\tfrac{1}{2} + \cos\phi\right)^2.    (4.72)

For more details on calculating the scattered fields and the cross sections see [28].
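The normal-incidence widths (4.67)–(4.68) make the well-known point that a thin wire scatters a TM-polarized wave far more strongly than a TE-polarized one when ka ≪ 1. A minimal numerical sketch (our own helper names; γ ≈ 1.781 as in the text):

```python
import numpy as np

GAMMA = 1.781  # the constant appearing as log(2/(GAMMA*ka)) in (4.65)-(4.67)

def sigma_tm(k, a):
    # TM scattering width (4.67), valid for ka << 1
    ka = k * a
    return np.pi**2 * a / (ka * (np.log(2.0 / (GAMMA * ka))**2 + np.pi**2 / 4))

def sigma_te(k, a, phi):
    # TE scattering width (4.68), valid for ka << 1; phi is the bistatic angle
    ka = k * a
    return np.pi**2 * a * ka**3 * (0.5 + np.cos(phi))**2
```

For example, with k = 10 and a = 10^-3 (so ka = 0.01), the TM width exceeds the TE width by many orders of magnitude, which is why a thin linear target behaves as a strongly polarization-selective scatterer.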

Finite Cylinder RCS

We now consider the case of a finite cylinder, which is the actual scatterer in our scenario. We assume its length, denoted h, is significantly longer than several wavelengths. This assumption allows us to ignore the resonance effect in the scattered field that occurs when the cylinder's length is a multiple of a half-wavelength. As the length increases, the scattered field appears mainly in the specular direction, so we may use the results for the infinitely long cylinder to calculate the scattering cross section of a long, thin cylinder. We essentially assume that the scattered field is the same as for an infinitely long cylinder in the region very near the cylinder radially, except when |z| > h, where we assume the fields are zero. More details may be found in [32]. We are mainly concerned with the mono-static case, so we assume Ψ_s = Ψ_i = Ψ. Figure 4.6 describes the scattering scenario.

Figure 4.6: Scattering scenario for a finite-length cylinder [28]

In this case we obtain the following cross-section expression

    \sigma(\Psi) = \frac{2\pi h^2 \cos^2(\gamma_i)\cos^2(\gamma_s)}{\log^2(2/(\gamma ka\cos\Psi)) + \pi^2/4} \left[\frac{\sin(2kh\sin\Psi)}{2kh\sin\Psi}\right]^2    (4.73)

where γ_i and γ_s are the angles that define the directions of the desired incident and scattered polarization states with respect to the TM planes.

4.3 Dipole SAR Scattering Model

We now move on to discuss our mathematical model for scattering. We assume our SAR system is made up of two dipole antennas, a and b, which travel along paths γ_a and γ_b. Dipole a transmits the waveform p_a(t), and the scattered field is received on both a and b. Similarly, dipole b transmits the waveform p_b(t), and the scattered field is received on both a and b. We denote the Fourier transforms of the waveforms by P_a and P_b. We also assume the dipoles have directions ê_a and ê_b, respectively. We model our object of interest, or target, as a collection of dipoles located at various pixels and with various orientations. We say a given target dipole at location x has orientation, or direction, ê_T(x) = [cos θ(x), sin θ(x), 0]. Similarly, we model our clutter as unwanted scatterers which are again made up of dipoles, at various locations y with orientations ê_C(y). We also assume our measurements are corrupted by noise n. Therefore our forward model in the frequency domain is of the form

    D_{i,j}(k, s) = \mathcal{F}_T[T_{i,j}](k, s) + \mathcal{F}_C[C_{i,j}](k, s) + n_{i,j}(k, s)    (4.74)

where i = a, b and j = a, b. We call D_{i,j} the set of data collected when we transmit on the ith antenna and receive on the jth antenna. Also note that T_{i,j} and C_{i,j} are the functions that describe the target and clutter, and n_{i,j} is the noise that corrupts the measurements when we transmit on i and receive on j. We now go into more detail to describe the scattering from the dipoles that make up our target and clutter. Please note we use the convention where vectors appear in bold font, e.g. x, and matrices are underlined, e.g. A.

We now return to the method-of-potentials solution of Maxwell's equations

in section 4.2.2. Instead of solving the resulting Helmholtz equations (4.31) and (4.38) in cylindrical coordinates, we remain in the standard Cartesian coordinate system and utilize the Green's function solutions as in chapter 2. Therefore we have

    A(x) = \int \frac{e^{ik|x-y|}}{4\pi|x-y|}\, \mu_0 J(y)\, dy \approx \frac{e^{ik|x|}}{4\pi|x|}\, \mu_0 \underbrace{\int e^{-ik\hat{x}\cdot y} J(y)\, dy}_{F(k\hat{x}) = \mathcal{F}[J](k\hat{x})} = \frac{e^{ik|x|}}{4\pi|x|}\, \mu_0 F(k\hat{x})    (4.75)

where F is the radiation vector. Observe that we have also used the far-field approximation [2] in (4.75). The expression for Φ may be obtained from the Lorenz gauge equation, ∇·A − iωε_0μ_0Φ = 0. We then obtain E from A via the following expression:

    E = i\omega\left[A + \frac{1}{k^2}\nabla(\nabla\cdot A)\right].    (4.76)

Taking |x| large results in

    E_{rad}(x) = i\omega\mu_0 \frac{e^{ik|x|}}{4\pi|x|}\left[F - \hat{x}(\hat{x}\cdot F)\right] = -i\omega\mu_0 \frac{e^{ik|x|}}{4\pi|x|}\left[\hat{x}\times(\hat{x}\times F)\right]    (4.77)

where we have used the triple-product vector identity. For more details on the far-field approximation utilized here see [2]. We now adapt this general result to our specific antenna. In the frequency domain, the far-field electric field due to a radiating dipole of length a, located at position γ_a(s) and pointing in direction ê_a, is

    E^a(k, x) = \frac{e^{ikR^a_{x,s}}}{4\pi R^a_{x,s}}\left[\hat{R}^a_{x,s}\times(\hat{R}^a_{x,s}\times \hat{e}_a)\right] F_a(k\,\hat{R}^a_{x,s}\cdot\hat{e}_a)\, P_a(k)    (4.78)

where R^a_{x,s} = x − γ_a(s), R^a_{x,s} = |R^a_{x,s}|, \hat{R}^a_{x,s} = R^a_{x,s}/R^a_{x,s}, and

    F_a(k\cos\theta) = a\,\mathrm{sinc}\left(\frac{ka}{2}\cos\theta\right)    (4.79)

is the antenna pattern of dipole a. We calculate the radiation vector for our dipole antenna in Appendix B. We may obtain the time-domain version of the

electric field by taking the Fourier transform; we have

    E^a(t, x) \propto \int e^{-ikt}\, E^a(k, x)\, dk.    (4.80)

Recall that we assume any scatterer in our scene of interest is modeled as a dipole located at position x and pointing in direction ê_sc(x) = [cos θ_sc(x), sin θ_sc(x), 0]. This scatterer may be part of the extended target or a clutter scatterer. Each dipole making up the scatterer acts as a receiving antenna with antenna pattern F_sc. We can calculate the current excited on the dipole at position x and pointing in direction ê_sc(x), due to the incident field E^a. We have

    I_{sc} \propto \hat{e}_{sc}\cdot E^a\, F_{sc}(k\,\hat{R}^a_{x,s}\cdot\hat{e}_{sc})
          = \hat{e}_{sc}\cdot\left[\hat{R}^a_{x,s}\times(\hat{R}^a_{x,s}\times\hat{e}_a)\right] F_a(k\,\hat{R}^a_{x,s}\cdot\hat{e}_a)\, F_{sc}(k\,\hat{R}^a_{x,s}\cdot\hat{e}_{sc})\, \frac{e^{ikR^a_{x,s}}}{4\pi R^a_{x,s}}\, P_a(k).    (4.81)

We assume that the current induced on the dipole radiates again as a dipole antenna, in this process acquiring strength ρ(x) and again antenna pattern F_sc. Thus we obtain the field back at γ_b(s):

    E(k, \gamma_b(s)) \propto \rho(x)\, \frac{e^{ik(R^a_{x,s}+R^b_{x,s})}}{16\pi^2 R^a_{x,s} R^b_{x,s}}\, \left[\hat{e}_{sc}\cdot\left(\hat{R}^a_{x,s}\times(\hat{R}^a_{x,s}\times\hat{e}_a)\right)\right] \left(\hat{R}^b_{x,s}\times(\hat{R}^b_{x,s}\times\hat{e}_{sc})\right)
    \times F_{sc}(k\,\hat{R}^a_{x,s}\cdot\hat{e}_{sc})\, F_{sc}(k\,\hat{R}^b_{x,s}\cdot\hat{e}_{sc})\, F_b(k\,\hat{R}^b_{x,s}\cdot\hat{e}_b)\, F_a(k\,\hat{R}^a_{x,s}\cdot\hat{e}_a)\, P_a(k).    (4.82)

We assume the measured data is given by the current on the dipole located at

position γ_b(s) with orientation ê_b. We calculate this as in equation (4.81). We have

    D_{a,b}(k, s) \propto \hat{e}_b \cdot E(k, \gamma_b(s))
    = \int \rho(x)\, \frac{e^{ik(R^a_{x,s}+R^b_{x,s})}}{16\pi^2 R^a_{x,s} R^b_{x,s}}\, F_{sc}(k\,\hat{R}^a_{x,s}\cdot\hat{e}_{sc})\, F_{sc}(k\,\hat{R}^b_{x,s}\cdot\hat{e}_{sc})\, F_b(k\,\hat{R}^b_{x,s}\cdot\hat{e}_b)\, F_a(k\,\hat{R}^a_{x,s}\cdot\hat{e}_a)\, P_a(k)
    \times \left[\hat{e}_{sc}\cdot\left(\hat{R}^a_{x,s}\times(\hat{R}^a_{x,s}\times\hat{e}_a)\right)\right] \left[\hat{e}_b\cdot\left(\hat{R}^b_{x,s}\times(\hat{R}^b_{x,s}\times\hat{e}_{sc})\right)\right] dx.    (4.83)

Here the two subscripts on the left side of (4.83) indicate that we transmit on dipole a and receive on dipole b. Also note that we integrate over all possible ground locations x in the scene of interest.

Now recall, as in standard SAR, that we ultimately aim to have a forward model of the form D = F[T + C] + n, or more specifically as in equation (4.74). Ideally the forward operator F_T (and also F_C) is a linear operator; in that case it is very simple to calculate analytically an appropriate approximate inverse operator. We observe that in its current form our model is far from linear. Our two unknown quantities are ρ(x) and ê_sc(x). The model is linear in ρ, but ê_sc appears as the argument of the radiation patterns (the sinc functions) and also appears in the vector triple products. (We suppress the dependence of ê_sc on x for ease of writing.) In order to linearize our model we will ultimately make simplifying assumptions.

We first address the vector product expressions; we will show that no approximation is needed to write this portion of the model in a linear fashion. The vector expressions on the last line of (4.83) can be rewritten with the help of the triple-product, or BAC-CAB, identity as

    \left[\hat{e}_{sc}\cdot\left(\hat{R}^a_{x,s}\times(\hat{R}^a_{x,s}\times\hat{e}_a)\right)\right] \left[\hat{e}_b\cdot\left(\hat{R}^b_{x,s}\times(\hat{R}^b_{x,s}\times\hat{e}_{sc})\right)\right]
    = \left[\left(\hat{R}^a_{x,s}\times(\hat{R}^a_{x,s}\times\hat{e}_a)\right)\cdot\hat{e}_{sc}\right] \left[\left(\hat{R}^b_{x,s}\times(\hat{R}^b_{x,s}\times\hat{e}_b)\right)\cdot\hat{e}_{sc}\right].    (4.84)
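The step in (4.84) — moving ê_b inside so that both factors are dotted with ê_sc — relies on the operator e ↦ R̂ × (R̂ × e) being symmetric, i.e. u·(R̂ × (R̂ × v)) = v·(R̂ × (R̂ × u)). A quick numerical check of this symmetry (a sketch with NumPy; the vectors are arbitrary):

```python
import numpy as np

def double_cross(Rhat, e):
    # the operator appearing throughout (4.83)-(4.85): R x (R x e)
    return np.cross(Rhat, np.cross(Rhat, e))

rng = np.random.default_rng(0)
R = rng.normal(size=3)
Rhat = R / np.linalg.norm(R)
u, v = rng.normal(size=3), rng.normal(size=3)

# symmetry: u . (R x (R x v)) == v . (R x (R x u)),
# since R x (R x e) = Rhat (Rhat . e) - e
lhs = np.dot(u, double_cross(Rhat, v))
rhs = np.dot(v, double_cross(Rhat, u))
```

The symmetry follows immediately from the BAC-CAB expansion R̂ × (R̂ × e) = R̂(R̂·e) − e, since both u·R̂ R̂·v and u·v are symmetric in u and v.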

We note, moreover, that again from the BAC-CAB identity, we have

    \hat{R}\times(\hat{R}\times\hat{e}) = \hat{R}(\hat{R}\cdot\hat{e}) - \hat{e}(\hat{R}\cdot\hat{R}) = -\left[\hat{e} - \hat{R}(\hat{R}\cdot\hat{e})\right] = -P_R\hat{e},    (4.85)

where P_R denotes the operator that projects a vector onto the plane perpendicular to R. Thus we can write (4.84) as

    \left[(P_{\hat{R}^a_{x,s}}\hat{e}_a)\cdot\hat{e}_{sc}\right]\left[(P_{\hat{R}^b_{x,s}}\hat{e}_b)\cdot\hat{e}_{sc}\right].    (4.86)

We observe that we can consider R the direction of propagation; the above operation therefore projects the antenna directions onto the plane perpendicular to the direction of propagation. This is precisely the right-handed coordinate system we discussed in section 4.2.1. In addition, we may rewrite (4.84) using tensor products, that is,

    \left[\tilde{R}^a_{x,s}\cdot\hat{e}_{sc}\right]\left[\tilde{R}^b_{x,s}\cdot\hat{e}_{sc}\right] = (\tilde{R}^a \otimes \tilde{R}^b) : (\hat{e}_{sc}\otimes\hat{e}_{sc})    (4.87)

where \tilde{R} = \hat{R}\times(\hat{R}\times\hat{e}), ⊗ is the standard tensor product, and : is the double-dot product, in which one multiplies the entries of the matrices component-wise and sums (the matrix analog of the vector dot product). We may also express this operation in a linear fashion. If \tilde{R}^a = (x_a, y_a, z_a) and \tilde{R}^b = (x_b, y_b, z_b), stacking the four antenna-pair combinations gives

    \begin{bmatrix} (\tilde{R}^a\otimes\tilde{R}^a):(\hat{e}_{sc}\otimes\hat{e}_{sc}) \\ (\tilde{R}^a\otimes\tilde{R}^b):(\hat{e}_{sc}\otimes\hat{e}_{sc}) \\ (\tilde{R}^b\otimes\tilde{R}^a):(\hat{e}_{sc}\otimes\hat{e}_{sc}) \\ (\tilde{R}^b\otimes\tilde{R}^b):(\hat{e}_{sc}\otimes\hat{e}_{sc}) \end{bmatrix}
    = \begin{bmatrix} x_a^2 & x_a y_a & x_a y_a & y_a^2 \\ x_a x_b & x_a y_b & y_a x_b & y_a y_b \\ x_a x_b & y_a x_b & x_a y_b & y_a y_b \\ x_b^2 & x_b y_b & x_b y_b & y_b^2 \end{bmatrix}
    \begin{bmatrix} \cos^2\theta_{sc} \\ \cos\theta_{sc}\sin\theta_{sc} \\ \sin\theta_{sc}\cos\theta_{sc} \\ \sin^2\theta_{sc} \end{bmatrix}.    (4.88)

Note that the third coordinates of the \tilde{R} vectors are not present above because in the double-dot product they multiply the third coordinate of ê_sc, which is always zero.
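The linearization in (4.88) can be verified directly: the 4 × 4 matrix applied to the trigonometric 4-vector must reproduce the products (R̃^i·ê_sc)(R̃^j·ê_sc) for all four antenna pairs. A short numerical check (our own helper names, with arbitrary vectors R̃^a, R̃^b):

```python
import numpy as np

def scattering_vec(th):
    # the trigonometric 4-vector on the right of (4.88)
    c, s = np.cos(th), np.sin(th)
    return np.array([c * c, c * s, s * c, s * s])

def pair_matrix(Ra, Rb):
    # the 4x4 matrix of (4.88); rows correspond to the antenna pairs
    # (a,a), (a,b), (b,a), (b,b); only in-plane components enter
    xa, ya = Ra[0], Ra[1]
    xb, yb = Rb[0], Rb[1]
    return np.array([
        [xa * xa, xa * ya, xa * ya, ya * ya],
        [xa * xb, xa * yb, ya * xb, ya * yb],
        [xa * xb, ya * xb, xa * yb, ya * yb],
        [xb * xb, xb * yb, xb * yb, yb * yb]])

rng = np.random.default_rng(1)
Ra, Rb = rng.normal(size=3), rng.normal(size=3)
th = 0.7
e = np.array([np.cos(th), np.sin(th), 0.0])  # dipole orientation, z-component 0

lhs = pair_matrix(Ra, Rb) @ scattering_vec(th)
rhs = np.array([np.dot(Ra, e) * np.dot(Ra, e), np.dot(Ra, e) * np.dot(Rb, e),
                np.dot(Rb, e) * np.dot(Ra, e), np.dot(Rb, e) * np.dot(Rb, e)])
```

Agreement of `lhs` and `rhs` confirms that the nonlinear dependence on ê_sc has been absorbed, without approximation, into the quadratic vector that becomes S(θ) below.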

Also note that we have now expressed our forward model in terms of the quantity

    S(\theta) = \begin{bmatrix} \cos^2\theta \\ \cos\theta\sin\theta \\ \sin\theta\cos\theta \\ \sin^2\theta \end{bmatrix},    (4.89)

which is the scattering vector, or vectorized scattering matrix, of a dipole scatterer in the polarimetry literature [16, 26]. We now choose to define our two unknowns as ρ(x) and S(θ_sc). (Observe that we again suppress the dependence of θ_sc on x for ease of writing.) If we receive on both a and b, we obtain a data matrix consisting of

    D(k, s) = \begin{bmatrix} D_{a,a} & D_{b,a} \\ D_{a,b} & D_{b,b} \end{bmatrix},    (4.90)

or equivalently we have the data vector

    D(k, s) = \begin{bmatrix} D_{a,a} \\ D_{a,b} \\ D_{b,a} \\ D_{b,b} \end{bmatrix}.    (4.91)

If we make the assumption that antennas a and b are collocated, that is, we assume a monostatic system, we have the following data expression for any dipole scatterer:

    D(k, s) = \int e^{2ikR_{x,s}} \begin{bmatrix} A_{a,a}x_a^2 & A_{a,a}x_a y_a & A_{a,a}x_a y_a & A_{a,a}y_a^2 \\ A_{a,b}x_a x_b & A_{a,b}x_a y_b & A_{a,b}y_a x_b & A_{a,b}y_a y_b \\ A_{b,a}x_a x_b & A_{b,a}y_a x_b & A_{b,a}x_a y_b & A_{b,a}y_a y_b \\ A_{b,b}x_b^2 & A_{b,b}x_b y_b & A_{b,b}x_b y_b & A_{b,b}y_b^2 \end{bmatrix} \begin{bmatrix} \cos^2\theta_{sc} \\ \cos\theta_{sc}\sin\theta_{sc} \\ \sin\theta_{sc}\cos\theta_{sc} \\ \sin^2\theta_{sc} \end{bmatrix} \rho_{sc}(x)\, dx    (4.92)

where we define

    A_{i,j} = \frac{1}{16\pi^2 R^a_{x,s} R^b_{x,s}} \left(F_{sc}(k\,\hat{R}_{x,s}\cdot\hat{e}_{sc})\right)^2 F_i(k\,\hat{R}_{x,s}\cdot\hat{e}_i)\, F_j(k\,\hat{R}_{x,s}\cdot\hat{e}_j)\, P_i(k)    (4.93)

for i = a, b and j = a, b.

Comparison to the Extended Target RCS Model

We now pause in the derivation of our forward model to comment on how it compares with the RCS model for an extended object described in section 4.2.4. Recall that we have the following expression for the RCS in this case (i.e. a finite-length cylinder whose length is much greater than the wavelength) [28, 33]:

    \sigma(\Psi) = \frac{2\pi h^2 \cos^2(\gamma_i)\cos^2(\gamma_s)}{\log^2(2/(\gamma ka\cos\Psi)) + \pi^2/4} \left[\frac{\sin(2kh\sin\Psi)}{2kh\sin\Psi}\right]^2.    (4.94)

We now express our model in terms of the angles Ψ, γ_i, and γ_s in order to make the comparison. We begin by defining a right-handed coordinate system in spherical coordinates, with the three basis vectors k̂ (analogous to R̂), ĥ, and v̂. The last two basis vectors lie in the plane perpendicular to k̂ (or R̂), the direction of propagation, and hence define a polarization basis. Explicitly we have

    \hat{k} = \sin\phi\cos\alpha\,\hat{x} + \sin\phi\sin\alpha\,\hat{y} + \cos\phi\,\hat{z}    (4.95)

    \hat{h} = \sin\alpha\,\hat{x} - \cos\alpha\,\hat{y}    (4.96)

    \hat{v} = \cos\phi\cos\alpha\,\hat{x} + \cos\phi\sin\alpha\,\hat{y} - \sin\phi\,\hat{z}    (4.97)

where φ and α are the elevation and azimuth angles describing the direction of observation, or propagation. Figure 4.7 illustrates this coordinate system. Here θ_sc is the orientation of the cylinder, or scatterer, as in the previous section, and we may write ê_sc = cos θ_sc x̂ + sin θ_sc ŷ. It is also important to observe that in terms of our angles we may write the incident direction as Ψ = α − θ_sc, since Ψ was defined as the incident wave direction in the plane in which the cylinder lies. Next we write the vectors \tilde{R}^a and \tilde{R}^b in terms of the right-handed coordinate system. Recall that \tilde{R} = -P_R\hat{e}, and therefore each of these vectors lies in the plane perpendicular to R̂, which is precisely the ĥ–v̂ plane. We therefore may

Figure 4.7: Spherical coordinates

express these vectors as linear combinations of the two basis vectors:

    \tilde{R}^a = A_1\hat{h} + A_2\hat{v} = |\tilde{R}^a|\left(\cos\beta_{ah}\,\hat{h} + \cos\beta_{av}\,\hat{v}\right)    (4.98)

    \tilde{R}^b = B_1\hat{h} + B_2\hat{v} = |\tilde{R}^b|\left(\cos\beta_{bh}\,\hat{h} + \cos\beta_{bv}\,\hat{v}\right).    (4.99)

Note we have used the fact that A_1 = \tilde{R}^a\cdot\hat{h} = |\tilde{R}^a|\cos\beta_{ah}, where β_{ah} is the angle between \tilde{R}^a and ĥ. We may perform the same dot product to calculate the other coefficients A_2, B_1, and B_2 in a similar fashion. Note that the angles β_{hi} and β_{vi}, i = a, b, correspond to the angles γ_i and γ_s in equation (4.94). We now calculate the quantities (\tilde{R}^i \otimes \tilde{R}^j) : (\hat{e}_{sc}\otimes\hat{e}_{sc}) for each of the four combinations of i = a, b and j = a, b. We rewrite this quantity in the form

    (\tilde{R}^i \otimes \tilde{R}^j) : (\hat{e}_{sc}\otimes\hat{e}_{sc}) = (\tilde{R}^i\cdot\hat{e}_{sc})(\tilde{R}^j\cdot\hat{e}_{sc})    (4.100)

as it is easier to see intuitively how to substitute in the quantities we have defined above. We will work out the details for the example when i = a and j = b and then

quote the results for the other antenna pairs. In this case we have

    \tilde{R}^a\cdot\hat{e}_{sc} = A_1\sin\alpha\cos\theta_{sc} - A_1\cos\alpha\sin\theta_{sc} + A_2\cos\phi\cos\alpha\cos\theta_{sc} + A_2\cos\phi\sin\alpha\sin\theta_{sc}
    = A_1(\sin\alpha\cos\theta_{sc} - \cos\alpha\sin\theta_{sc}) + A_2\cos\phi(\cos\alpha\cos\theta_{sc} + \sin\alpha\sin\theta_{sc}).    (4.101)

Similarly we have

    \tilde{R}^b\cdot\hat{e}_{sc} = B_1(\sin\alpha\cos\theta_{sc} - \cos\alpha\sin\theta_{sc}) + B_2\cos\phi(\cos\alpha\cos\theta_{sc} + \sin\alpha\sin\theta_{sc}).    (4.102)

Next we use the angle sum and difference trigonometric identities to rewrite the quantities in parentheses, which gives the following expressions:

    \tilde{R}^a\cdot\hat{e}_{sc} = A_1\sin(\alpha - \theta_{sc}) + A_2\cos\phi\cos(\alpha - \theta_{sc})    (4.103)

    \tilde{R}^b\cdot\hat{e}_{sc} = B_1\sin(\alpha - \theta_{sc}) + B_2\cos\phi\cos(\alpha - \theta_{sc}).    (4.104)

We may replace the angle α − θ_sc with Ψ, as stated earlier. Using these new angles we may now write the quantity present in our data model as

    (\tilde{R}^a \otimes \tilde{R}^b) : (\hat{e}_{sc}\otimes\hat{e}_{sc}) = (\tilde{R}^a\cdot\hat{e}_{sc})(\tilde{R}^b\cdot\hat{e}_{sc})
    = A_1 B_1\sin^2\Psi + (A_1 B_2 + A_2 B_1)\cos\phi\sin\Psi\cos\Psi + A_2 B_2\cos^2\phi\cos^2\Psi.    (4.105)

With all the possible antenna pairs we have the vector

    \begin{bmatrix} (\tilde{R}^a\otimes\tilde{R}^a):(\hat{e}_{sc}\otimes\hat{e}_{sc}) \\ (\tilde{R}^a\otimes\tilde{R}^b):(\hat{e}_{sc}\otimes\hat{e}_{sc}) \\ (\tilde{R}^b\otimes\tilde{R}^a):(\hat{e}_{sc}\otimes\hat{e}_{sc}) \\ (\tilde{R}^b\otimes\tilde{R}^b):(\hat{e}_{sc}\otimes\hat{e}_{sc}) \end{bmatrix}
    = \begin{bmatrix} A_1^2\sin^2\Psi + 2A_1A_2\cos\phi\sin\Psi\cos\Psi + A_2^2\cos^2\phi\cos^2\Psi \\ A_1B_1\sin^2\Psi + (A_1B_2 + A_2B_1)\cos\phi\sin\Psi\cos\Psi + A_2B_2\cos^2\phi\cos^2\Psi \\ A_1B_1\sin^2\Psi + (A_1B_2 + A_2B_1)\cos\phi\sin\Psi\cos\Psi + A_2B_2\cos^2\phi\cos^2\Psi \\ B_1^2\sin^2\Psi + 2B_1B_2\cos\phi\sin\Psi\cos\Psi + B_2^2\cos^2\phi\cos^2\Psi \end{bmatrix}.

Note that this is equivalent to A S(θ_sc) from equation (4.92). Now, looking at the expression for σ(Ψ), equation (4.94), we see that the RCS model simply takes the first term from each element of the vector given above. This again corresponds to neglecting the cross-term elements present in the scattered field, i.e. the fact that

78 67 Eh s = S hheh i + S hvev i and Ev s = S vh Eh i + S vvev i as we stated in section In the following sections we will see that our inclusion of these cross-terms aids in scattering vector reconstruction in the presence of noise and clutter. We now return to deriving our forward model Scattering Model for the Target Recall we had the expression for the data D(k, s) = e 2ik(Rx,s) A a,a x 2 a A a,a x a y a A a,a x a y a A a,a ya 2 cos 2 θ sc A a,b x a x b A a,b x a y b A a,b y a x b A a,b y a y b cos θ sc sin θ sc ρ A b,a x a x b A b,a y a x b A b,a x a y b A b,a y b y b sc (x) dx sin θ sc cos θ sc A b,b x 2 b A b,b x b y b A b,b x b y b A b,b yb 2 sin 2 θ sc (4.106) where A i,j = (1/16π 2 R a x,sr b x,s)(f sc (k R x,s ê s )) 2 F i (k R x,s ê i )F j (k R x,s ê j )P i (k) for i = a, b and j = a, b. The issue that remains to be dealt with in terms of linearizing is the fact that the argument of the radiation pattern of the scatterer, F sc, contains one of our unknowns, ê sc. In order to remove this nonlinearity we will make an assumption about the radiation pattern of the scatterer. However we will make different assumptions based on whether the scatterer is part of the extended target we seek to image or if it is a clutter scatterer present in the scene. In this section we focus on the scatterers that make up our target of interest. These linearizing assumptions also serve a second purpose. They will demonstrate the different type of scattering behavior we expect to see with an extended target versus a clutter scatterer. Recall that the specific type of target we wish to image is a line or curve, which can be thought of as an edge of many manmade objects. We have already mentioned that we expect a specific directional, or anisotropic, scattering response from our target. 
In particular, based on this target type, we assume that we obtain a strong return from the scatterer only when the direction of the target is perpendicular to the look direction, i.e. the direction of propagation of the electromagnetic waves. Therefore we assume that F_T is narrowly peaked around 0; that is, the main contributions to the data arise when \hat{R}_{x,s}\cdot\hat{e}_{sc} \approx 0. Note that in a bi-static system we would require \hat{R}^i_{x,s}\cdot\hat{e}_T \approx 0 for i = a, b. We now change the subscript indicating the scatterer to the letter T, to indicate that we are considering a target scatterer. Using our directional

assumption we can simplify the expression (4.84):

    \left[\left(\hat{R}_{x,s}\times(\hat{R}_{x,s}\times\hat{e}_a)\right)\cdot\hat{e}_T\right]\left[\left(\hat{R}_{x,s}\times(\hat{R}_{x,s}\times\hat{e}_b)\right)\cdot\hat{e}_T\right] = (\hat{e}_T\cdot\hat{e}_a)(\hat{e}_T\cdot\hat{e}_b) = [\hat{e}_a\otimes\hat{e}_b] : [\hat{e}_T\otimes\hat{e}_T]
    = \begin{bmatrix} a_1^2 & a_1a_2 & a_1a_2 & a_2^2 \\ a_1b_1 & a_1b_2 & a_2b_1 & a_2b_2 \\ a_1b_1 & a_2b_1 & a_1b_2 & a_2b_2 \\ b_1^2 & b_1b_2 & b_1b_2 & b_2^2 \end{bmatrix} \begin{bmatrix} \cos^2\theta \\ \cos\theta\sin\theta \\ \sin\theta\cos\theta \\ \sin^2\theta \end{bmatrix}    (4.107)

where we let ê_a = (a_1, a_2, 0) and ê_b = (b_1, b_2, 0). Note we have used the triple-product, or BAC-CAB, identity as in (4.85) and the tensor notation as in (4.87). We can also say that

    F_T(k\,\hat{R}^i_{x,s}\cdot\hat{e}_T) = \begin{cases} 1 & \text{if } \hat{R}^i_{x,s}\cdot\hat{e}_T \approx 0 \\ 0 & \text{otherwise} \end{cases}    (4.108)

for i = a, b. This eliminates the remaining nonlinearity in our forward model. Therefore we may express the data received from the target in the following form:

    D_T(k, s) = \int e^{2ikR_{x,s}} A^T(k, s, x)\, T(x)\, dx    (4.109)

where we define T(x) = ρ_T(x)S(θ_T) as the target function. This quantity is the unknown we will reconstruct in our imaging scheme. We have also defined the amplitude matrix A^T:

    A^T(k, s, x) = \begin{bmatrix} A_{a,a}a_1^2 & A_{a,a}a_1a_2 & A_{a,a}a_1a_2 & A_{a,a}a_2^2 \\ A_{a,b}a_1b_1 & A_{a,b}a_1b_2 & A_{a,b}a_2b_1 & A_{a,b}a_2b_2 \\ A_{b,a}a_1b_1 & A_{b,a}a_2b_1 & A_{b,a}a_1b_2 & A_{b,a}a_2b_2 \\ A_{b,b}b_1^2 & A_{b,b}b_1b_2 & A_{b,b}b_1b_2 & A_{b,b}b_2^2 \end{bmatrix}    (4.110)

where

    A_{a,a} = \frac{F_a(k\,\hat{R}_{x,s}\cdot\hat{e}_a)\, F_a(k\,\hat{R}_{x,s}\cdot\hat{e}_a)\, P_a(k)}{16\pi^2 R^a_{x,s}R^b_{x,s}}    (4.111)

    A_{a,b} = \frac{F_a(k\,\hat{R}_{x,s}\cdot\hat{e}_a)\, F_b(k\,\hat{R}_{x,s}\cdot\hat{e}_b)\, P_a(k)}{16\pi^2 R^a_{x,s}R^b_{x,s}}    (4.112)

    A_{b,a} = \frac{F_a(k\,\hat{R}_{x,s}\cdot\hat{e}_a)\, F_b(k\,\hat{R}_{x,s}\cdot\hat{e}_b)\, P_b(k)}{16\pi^2 R^a_{x,s}R^b_{x,s}}    (4.113)

    A_{b,b} = \frac{F_b(k\,\hat{R}_{x,s}\cdot\hat{e}_b)\, F_b(k\,\hat{R}_{x,s}\cdot\hat{e}_b)\, P_b(k)}{16\pi^2 R^a_{x,s}R^b_{x,s}}.    (4.114)

We see clearly that the amplitude matrix no longer depends on the unknown quantity ê_T, and therefore we obtain the following linear forward model for data obtained by scattering from the target:

    D_T(k, s) = \mathcal{F}_T[T](k, s) = \int e^{2ikR_{x,s}} A^T(k, s, x)\, T(x)\, dx

where \mathcal{F}_T is a linear forward operator. We observe the special case in which our model coincides with that of the radar literature. Note that if we let ê_a = [1, 0] and ê_b = [0, 1], then A^T becomes diagonal, that is,

    A^T(k, s, x) = \begin{bmatrix} A_{a,a} & 0 & 0 & 0 \\ 0 & A_{a,b} & 0 & 0 \\ 0 & 0 & A_{b,a} & 0 \\ 0 & 0 & 0 & A_{b,b} \end{bmatrix}.    (4.115)

In this case the cross-terms would not be included in the forward model. This corresponds to assuming that one can reconstruct each element of the scattering vector S_{i,j} from its corresponding data set D_{i,j}, for i = a, b and j = a, b. In the following sections we will compare reconstruction results using our data model with those created assuming that the amplitude matrix is diagonal as above.
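The structure of A^T in (4.110), and the diagonal special case (4.115), can be sketched directly. The code below builds the matrix from the antenna orientations with the scalar amplitudes A_{i,j} set to 1 for illustration (the function name is ours); for ê_a = [1, 0] and ê_b = [0, 1] the result is exactly diagonal:

```python
import numpy as np

def target_amplitude_matrix(ea, eb, Aaa=1.0, Aab=1.0, Aba=1.0, Abb=1.0):
    # Structure of A^T in (4.110); the scalar amplitudes A_{i,j} of
    # (4.111)-(4.114) are passed in as constants for illustration.
    a1, a2 = ea[0], ea[1]
    b1, b2 = eb[0], eb[1]
    return np.array([
        [Aaa * a1 * a1, Aaa * a1 * a2, Aaa * a1 * a2, Aaa * a2 * a2],
        [Aab * a1 * b1, Aab * a1 * b2, Aab * a2 * b1, Aab * a2 * b2],
        [Aba * a1 * b1, Aba * a2 * b1, Aba * a1 * b2, Aba * a2 * b2],
        [Abb * b1 * b1, Abb * b1 * b2, Abb * b1 * b2, Abb * b2 * b2],
    ])

A_diag = target_amplitude_matrix([1.0, 0.0], [0.0, 1.0])  # the special case (4.115)
```

For any other pair of antenna orientations the off-diagonal entries are nonzero, which is precisely the cross-term information the conventional diagonal model discards.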

Scattering Model for Clutter

We can also take the general model and adapt it to the expected scattering behavior of our clutter, the unwanted scatterers in our scene. In this case we replace the generic scatterer subscript with the letter C, to indicate that we are considering data received from scattering off an object in the scene that is not part of our target. We assume our clutter scatters isotropically, since it is most likely not made up of edges like our target. This implies that

    F_C(k\,\hat{R}_{x,s}\cdot\hat{e}_C) = 1    (4.116)

for all k, s, x. This removes the nonlinearity from the forward model, as it did for the target. We obtain the following forward model for clutter data:

    D_C(k, s) = \mathcal{F}_C[C](k, s) = \int e^{2ikR_{x,s}} A^C(k, s, x)\, C(x)\, dx,    (4.117)

where we let the function that describes the clutter be C(x) = ρ_C(x)S_C(θ). Note that ρ_C(x) is the clutter scattering strength at x, and S_C(θ) is the clutter scattering vector, which depends on the orientation of the clutter dipole element at location x. Observe that the amplitude matrix has the form

    A^C(k, s, x) = \begin{bmatrix} A_{a,a}x_a^2 & A_{a,a}x_a y_a & A_{a,a}x_a y_a & A_{a,a}y_a^2 \\ A_{a,b}x_a x_b & A_{a,b}x_a y_b & A_{a,b}y_a x_b & A_{a,b}y_a y_b \\ A_{b,a}x_a x_b & A_{b,a}y_a x_b & A_{b,a}x_a y_b & A_{b,a}y_a y_b \\ A_{b,b}x_b^2 & A_{b,b}x_b y_b & A_{b,b}x_b y_b & A_{b,b}y_b^2 \end{bmatrix},    (4.118)

where A_{i,j} for i = a, b and j = a, b are defined as in equations (4.111)–(4.114). Also recall that \tilde{R}^a = (x_a, y_a, z_a) and \tilde{R}^b = (x_b, y_b, z_b). Note that the operator \mathcal{F}_C is different from \mathcal{F}_T, but it is still a linear operator. We now combine all elements of the data into one expression and then move on to discuss the reconstruction, or imaging, process.
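One consequence of modeling clutter as dipoles with unstructured orientations is worth noting before the statistical assumptions of the next section: if the clutter orientations θ are uniformly distributed, the orientation-average of the dipole scattering vector is [1/2, 0, 0, 1/2], so the cross-polarization components of S_C average to zero. A Monte Carlo sketch (our own setup; the uniform-orientation distribution here is an illustrative assumption, not one the text has imposed yet):

```python
import numpy as np

rng = np.random.default_rng(2)
th = rng.uniform(0.0, 2.0 * np.pi, size=200_000)  # clutter dipole orientations

# dipole scattering vectors S(theta) for each clutter element, stacked as rows
S = np.stack([np.cos(th)**2, np.cos(th) * np.sin(th),
              np.sin(th) * np.cos(th), np.sin(th)**2], axis=1)

mean_S = S.mean(axis=0)  # orientation-average; analytically [1/2, 0, 0, 1/2]
```

If, in addition, the clutter strength ρ_C(x) has zero mean and is independent of the orientations, then E[C(x)] = E[ρ_C]E[S_C] = 0, consistent with the first-order statistics assumed in the next section.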

Total Forward Model

We can now combine the target and clutter data with measurement, or thermal, noise n to obtain the full forward model. That is, we expect our collected data D to be of the form

    D(k, s) = \mathcal{F}_T[T](k, s) + \mathcal{F}_C[C](k, s) + n(k, s) = D_T(k, s) + D_C(k, s) + n(k, s).    (4.119)

More specifically we have

    D(k, s) = \int e^{2ikR_{x,s}} A^T(k, s, x)\, T(x)\, dx + \int e^{2ikR_{x,s}} A^C(k, s, x)\, C(x)\, dx + n(k, s),    (4.120)

where we assume n is a 4 × 1 vector. We now also assume that our target vector T(x), our clutter vector C(x), and our noise vector n(k, s) are all second-order stochastic processes. A second-order stochastic process has finite variance; here, each element of the covariance matrix is finite. It is typical to make this assumption for clutter and noise. We choose to apply the same assumption to the target for the same reason that, in Bayes estimation, one assumes the parameter is a random variable or stochastic process. Since the location, shape, length, and all other descriptive quantities of the target are unknown to us prior to data collection, it is a standard technique in statistics to assume that this unknown is in fact stochastic in nature. This allows one to use statistical information about the object in the reconstruction, or imaging, task. We also note that although our target is relatively simple in nature, being a curve, most radar targets are significantly more complicated in appearance; even our simple object's RCS varies widely as one changes the angle of observation. Therefore it is an acceptable modeling choice to assume that the object is random in some respect. We may assume that only its descriptive parameters are random, such as its location and orientation, and assume that the standard form of the target function is known.
Alternatively, one may assume that the target function lies in some space of possible functions on which there is a probability distribution describing how likely it is that the target function coincides

with any given element of that space. We have already specified a somewhat rigid form for our target function, namely T(x) = ρ_T(x)S(θ_T). In this case the form of S is assumed, and we assume that only the parameter θ_T is stochastic. However, we leave the form of the scattering strength unspecified, and therefore we will eventually need to define a probability distribution describing the stochastic nature of ρ_T(x). For now we do not assign specific distributions to these random quantities; in our numerical experiments we will ascribe distributions, and they are discussed in detail in the following sections. We do, however, need to specify some statistical assumptions on T, C, and n. For the first-order statistics we have

    E[T(x)] = \mu(x)    (4.121)

    E[C(x)] = 0    (4.122)

    E[n(k, s)] = 0,    (4.123)

where μ(x) = [E[T_{a,a}(x)], E[T_{a,b}(x)], E[T_{b,a}(x)], E[T_{b,b}(x)]] and 0 above is the 4 × 1 zero vector. We also specify the autocovariance matrices for T, C, and n, where we define the (l, k)th entries as follows:

    C^T_{l,k}(x, x') = E\left[(T_l(x) - \mu_l(x))(T_k(x') - \mu_k(x'))^*\right]    (4.124)

    R^C_{l,k}(x, x') = E\left[C_l(x)\, C_k^*(x')\right]    (4.125)

    S^n_{l,k}(k, s; k', s') = E\left[n_l(k, s)\, n_k^*(k', s')\right],    (4.126)

where l = aa, ab, ba, bb and k = aa, ab, ba, bb. We assume the three processes are second-order, and we define all integrals involving the three processes in the mean-square sense. In addition, we assume the Fourier transforms of C^T, R^C, and S^n exist. We will consider two cases: in the first we assume that the target, clutter, and noise are mutually statistically independent; in the second we allow a correlation between the target and clutter processes. Now that our forward model is fully derived and specified, we move on to derive our imaging scheme. We begin with a brief discussion of the backprojection

process in general in the polarimetric SAR case and then give the results for the two statistical cases described above.

4.5 Image Formation in the presence of noise and clutter

In order to form an image of our target, we use a filtered-backprojection-based reconstruction method. Specifically, we apply the backprojection operator K to our data to form an image I of our target, i.e.

    I(z) = (KD)(z) = \int e^{-i2kR_{z,s}}\, Q(z, s, k)\, D(k, s)\, dk\, ds    (4.127)

where I(z) = [I_{a,a}(z), I_{a,b}(z), I_{b,a}(z), I_{b,b}(z)]. Plugging in the expression for D, we have

    I(z) = \int e^{-i2kR_{z,s}}\, Q(z, s, k)\left[\int e^{i2kR_{x,s}}\left\{A^T(k, s, x)T(x) + A^C(k, s, x)C(x)\right\} dx + n(k, s)\right] dk\, ds,    (4.128)

where we define Q as a 4 × 4 filter matrix. The filter Q can be chosen in a variety of ways. One way was already described in chapter 2: that method attempts to produce an image-fidelity operator that most closely resembles a delta function, and it works well when our target function can be described deterministically. We instead consider a statistical criterion for selecting the optimal filter Q. In particular, we attempt to minimize the mean-square error between the reconstructed image I and the actual target function T; this seeks to minimize the effect of noise and clutter on the resulting image. This method was first described for the case of standard SAR in the work of Yazici et al. [15]. We begin by defining the error process

    E(z) = I(z) - T(z) = \begin{bmatrix} I_{a,a}(z) - T_{a,a}(z) \\ I_{a,b}(z) - T_{a,b}(z) \\ I_{b,a}(z) - T_{b,a}(z) \\ I_{b,b}(z) - T_{b,b}(z) \end{bmatrix}.    (4.129)
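The backprojection step (4.127) can be illustrated in scalar form: simulate far-field data from point scatterers as phase histories e^{2ikR}, then match the phase at each candidate pixel. The following sketch (a toy monostatic geometry of our own choosing, with Q set to the identity) shows the coherent peak forming at a true target location:

```python
import numpy as np

# toy scalar backprojection: data d(s, k) = sum_j exp(2 i k R_j(s)),
# image I(z) = | sum_{s,k} exp(-2 i k R_z(s)) d(s, k) |
ks = np.linspace(50.0, 100.0, 64)                    # wavenumber samples
ss = np.linspace(0.0, 2.0 * np.pi, 128)              # slow-time samples
gamma = np.stack([30.0 * np.cos(ss), 30.0 * np.sin(ss)], axis=1)  # circular path
targets = np.array([[0.0, 0.0], [1.0, 0.5]])         # point-scatterer positions

R_t = np.linalg.norm(gamma[None] - targets[:, None], axis=2)      # (ntgt, ns)
data = np.exp(2j * R_t[:, :, None] * ks[None, None, :]).sum(axis=0)  # (ns, nk)

def image(z):
    # backprojected image value at pixel z (filter Q taken to be identity)
    Rz = np.linalg.norm(gamma - z, axis=1)
    return np.abs(np.sum(np.exp(-2j * ks[None, :] * Rz[:, None]) * data))
```

At a target position the phases cancel exactly and all samples add coherently; away from the targets the sum is oscillatory and small. The statistically chosen filter Q developed below replaces the identity used here.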

We also define the mean-square error as

    \mathcal{J}(Q) = \int E\left[\|E(z)\|^2\right] dz = \int E\left[E^*(z)\,E(z)\right] dz,    (4.130)

where E^* indicates the complex-conjugate transpose of the vector E. Note that we have

    \mathcal{J}(Q) = \mathcal{V}(Q) + \mathcal{B}(Q)    (4.131)

where

    \mathcal{V}(Q) = \int E\left[\|E(z) - E[E(z)]\|^2\right] dz    (4.132)

    \mathcal{B}(Q) = \int \|E[E(z)]\|^2\, dz.    (4.133)

Here V is known as the variance of the estimate and B is the bias. It is well known that the mean-square error is made up of variance and bias, and that when we attempt to minimize such a quantity there is always a tradeoff between minimizing variance and minimizing bias. We will also see that minimizing this quantity comes at a cost with respect to visible singularities of the target function. As discussed in [15], as we suppress the clutter and noise contributions to the image, we may also suppress the strengths of these singularities, which are key to identifying a target from an image. We will see, though, that our image-fidelity operator is again of the form of a pseudodifferential operator, and therefore the locations and orientations of these singularities are maintained.

Statistically Independent Case

We now consider the case when T, C, and n are all mutually statistically independent. We begin by stating the following theorem, which summarizes the optimal filter obtained in the case when we minimize the mean-square error.

Theorem. 1. Let D be given by (4.119) and let I be given by (4.128). Assume

S^n is given by (4.126), and define S^T, S^C, and M as follows:

    C^T(x, x') = \int\int e^{ix\cdot\zeta}\, e^{-ix'\cdot\zeta'}\, S^T(\zeta, \zeta')\, d\zeta\, d\zeta'    (4.134)

    R^C(x, x') = \int\int e^{ix\cdot\zeta}\, e^{-ix'\cdot\zeta'}\, S^C(\zeta, \zeta')\, d\zeta\, d\zeta'    (4.135)

    \mu(x)\mu^*(x') = \int\int e^{ix\cdot\zeta}\, e^{-ix'\cdot\zeta'}\, M(\zeta, \zeta')\, d\zeta\, d\zeta'.    (4.136)

Then any filter Q satisfying a symbol estimate and also minimizing the leading-order mean-square error J(Q) must be a solution of the following integral equation for all r and k:

    \left(\int \eta\, e^{ix\cdot(\zeta' - \zeta)}\left[(QA^T\eta - \chi_\Omega)(S^T + M)(A^T)^* + (QA^C\eta)\,S^C(A^C)^*\right] d\zeta'\right)\Big|_{(r,k)} + \left(Q\,S^n\,\eta^*\right)\Big|_{(r,k)} = 0.

2. If we make the following stationarity assumptions:

    S^T(\zeta, \zeta') = S^T(\zeta)\,\delta(\zeta - \zeta')    (4.137)

    S^C(\zeta, \zeta') = S^C(\zeta)\,\delta(\zeta - \zeta'),    (4.138)

then the filter Q minimizing the total error variance V(Q) is given by

    Q\left[|\eta|^2\left(A^T S^T (A^T)^* + A^C S^C (A^C)^*\right) + \eta^* S^n\right] = \eta^*\, \chi_\Omega\, S^T (A^T)^*.    (4.139)

We include in this theorem the case when we minimize simply the variance of the error process. That calculation leads to an algebraic expression for Q, as opposed to the integral-equation expression we obtain when we minimize the mean-square error. Note that in this second result we also make additional stationarity assumptions on the target and clutter processes.

Proof. 1. Our goal is to minimize J(Q), which is given by the expression

    \mathcal{J}(Q) = \int E\left[\left\|\left(K(\mathcal{F}_T(T) + \mathcal{F}_C(C) + n)\right)(z) - T(z)\right\|^2\right] dz.    (4.140)
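Equation (4.139) determines Q by a single matrix solve per frequency/pixel once the spectral densities are known. The sketch below (our own helper; random positive-definite stand-ins for the spectral densities) solves (4.139) and checks the obvious sanity limit: with no clutter and no noise, and A^T invertible, the filter reduces to a scaled inverse of A^T, so ηQA^T = χ_Ω I:

```python
import numpy as np

def optimal_filter(A_T, A_C, S_T, S_C, S_n, eta, chi):
    # Solve (4.139) for Q:
    #   Q [ |eta|^2 (A_T S_T A_T* + A_C S_C A_C*) + eta* S_n ] = eta* chi S_T A_T*
    lhs = (abs(eta) ** 2) * (A_T @ S_T @ A_T.conj().T
                             + A_C @ S_C @ A_C.conj().T) + np.conj(eta) * S_n
    rhs = np.conj(eta) * chi * S_T @ A_T.conj().T
    return rhs @ np.linalg.inv(lhs)

rng = np.random.default_rng(3)
eta, chi = 2.0, 1.0
A_T = rng.normal(size=(4, 4))                  # target amplitude matrix (invertible a.s.)
M = rng.normal(size=(4, 4))
S_T = M @ M.T + np.eye(4)                      # positive-definite target spectral density
A_C, S_C = np.zeros((4, 4)), np.eye(4)         # no clutter
S_n = np.zeros((4, 4))                         # no noise

Q = optimal_filter(A_T, A_C, S_T, S_C, S_n, eta, chi)
```

In this degenerate limit Q = (χ/η) A_T^{-1}, i.e. backprojection with the exact amplitude inverse; with clutter and noise switched on, the bracketed term regularizes that inverse, trading bias for variance exactly as the mean-square-error criterion dictates.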

Because we have assumed that T, C, and n are mutually statistically independent, this mean-square error can be written as three terms, one dependent on each process present in the data. That is,

    \mathcal{J}(Q) = \mathcal{J}_T(Q) + \mathcal{J}_C(Q) + \mathcal{J}_n(Q)    (4.141)

where

    \mathcal{J}_T(Q) = \int E\left[\|(K\mathcal{F}_T - I_\Omega)(T)(z)\|^2\right] dz    (4.142)

    \mathcal{J}_C(Q) = \int E\left[\|K(\mathcal{F}_C(C))(z)\|^2\right] dz    (4.143)

    \mathcal{J}_n(Q) = \int E\left[\|K(n)(z)\|^2\right] dz.    (4.144)

Also note we have

    I_\Omega T(z) = \int \chi_\Omega(z, \xi)\, e^{i(z - z')\cdot\xi}\, T(z')\, d\xi\, dz'    (4.145)

where χ_Ω(z, ξ) is a smoothed characteristic function, used to avoid ringing. The next step is to simplify the expression for J_T(Q). We first rewrite the expression for K(F_T T)(z). Recall we had

    K(\mathcal{F}_T T)(z) = \int e^{-i2k(R_{z,s} - R_{x,s})}\, Q(z, s, k)\, A^T(x, s, k)\, T(x)\, dx\, dk\, ds.    (4.146)

From the method of stationary phase in the variables (s, k), we know that the main contributions to the integral come from the critical points of the phase. We have assumed that only the critical point x = z is actually visible to the radar. In order to obtain a phase that resembles that of a delta function, namely i(x − z)·ξ, we expand the phase about the point x = z.

We utilize the following Taylor expansion formula, as in chapter two:

    f(x) - f(z) = \int_0^1 \frac{d}{d\mu} f(z + \mu(x - z))\, d\mu = (x - z)\cdot\int_0^1 \nabla f\big|_{z+\mu(x-z)}\, d\mu =: (x - z)\cdot\Xi(x, z, s, k),    (4.147)

where in our case f(z) = 2kR_{z,s}. We now perform the Stolt change of variables from (s, k) to ξ = Ξ(x, z, s, k). Therefore we have

    K(\mathcal{F}_T T)(z) = \int e^{i(x - z)\cdot\xi}\, Q(z, s(\xi), k(\xi))\, A^T(x, s(\xi), k(\xi))\, T(x)\, \eta(x, z, \xi)\, dx\, d\xi,    (4.148)

where η is the Jacobian resulting from the change of variables, sometimes called the Beylkin determinant. We can now substitute (4.148) into our expression (4.142) for J_T(Q). We have

    \mathcal{J}_T(Q) = \int E\left[\left\|\int \left\{Q(z, \xi)A^T(x, \xi)\eta(x, z, \xi) - \chi_\Omega(z, \xi)\right\} e^{i(x - z)\cdot\xi}\, T(x)\, d\xi\, dx\right\|^2\right] dz.    (4.149)

We note that (4.149) involves terms of the form

    \left\|\int\int e^{i(x - z)\cdot\xi}\, \tilde{A}(z, x, \xi)\, d\xi\, T(x)\, dx\right\|^2_{L^2}.    (4.150)

A standard result in the theory of pseudodifferential operators [50] tells us that each term in the integral expression in (4.150) can be written

    \mathcal{A}T := \int\int e^{i(x - z)\cdot\xi}\, \tilde{A}(z, x, \xi)\, d\xi\, T(x)\, dx = \int\int e^{i(x - z)\cdot\xi}\, p(\xi, z)\, d\xi\, T(x)\, dx    (4.151)

89 78 where p(ξ, z) = e iz ξ A(e iz ξ ). The symbol p has an asymptotic expansion p(ξ, z) α 0 i α α! Dα α ξ DxÃ(z, x, ξ) (4.152) z=x where α is a multi-index. In other words, the leading-order term of p(ξ, z) is simply Ã(z, z, ξ). The expression (4.150) can be written = AT, AT = A AT, T, (4.153) where, denotes the L 2 inner product. The symbol calculus for pseudodifferential operators [50] tells us that the leading-order term of the composition A A is A AT = e i(x z) ξ p (ξ, z)p(ξ, z)dξ T (x)dx. (4.154) This implies that the leading-order contribution to (4.149) is J T (Q) [ E { e i(x x ) ξ T (x) Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) { } ] Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) T (x ) dξdxdx. } (4.155) Our next task is to simplify the expression inside the expectation. We note that in the argument of the expectation we have a quantity that may be expressed as T (x)h HT (x ) where T is a 4 1 vector and H and H are two 4 4 matrices. If we write out the matrix and vector multiplication using

90 79 summations altogether we have E[T (x)h HT (x )] = E[ = = 4 l=1 4 l=1 4 l=1 4 4 p=1 r=1 4 4 p=1 r=1 4 p=1 r=1 H l,p H p,r T l (x)t r(x )] (4.156) H l,p H p,r E[T l (x)t r(x )] 4 H H l,p p,r (Cr,l(x T, x) + µ r (x )µ l (x)) = tr(h ( HC T (x, x))) + tr(h ( Hµ(x, x))) where recall C T is the covariance matrix of the target and µ(x) is the mean of the target. Also note we define the following parameter µ(x, x ) = µ(x)µ (x ) which is a 4 4 matrix as well. We therefore have that J T (Q) J T (Q) + B(Q) [ { } e i(x x ) ξ tr Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) { } ] Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) C T (x, x) dξdxdx [ { } + e i(x x ) ξ tr Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) { } ] Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) µ(x, x) dξdxdx. (4.157) Here we see explicitly the bias term of the mean-square error. The rest of the terms make up the variance portion. We can repeat the same steps for the clutter term J C (Q) (i.e. write KF in terms of ξ, express J C in terms of the composition of pseudodifferential

91 80 operators, and use the symbol calculus) to obtain the leading-order expression J C (Q) [ {Q(x e i(x x ) ξ tr, ξ)a C (x, ξ)η(x, x, ξ) } ] dξdxdx. (4.158) { Q(x, ξ)a C (x, ξ)η(x, x, ξ) } R C (x, x) The last term we need to simplify is the noise term, i.e. J n (Q). We write it out explicitly as (Kn)(z) = e i2krz,s Q(z, s, k)n(s, k)dkds, (4.159) which implies that J n (Q) = [ E e i2krz,s n (s, k)q (z, s, k) e i2k R z,s Q(z, s, k )n(s, k )dkdsdk ds ]dz. (4.160) We rewrite the matrix and vector multiplication as before to obtain J n (Q) = [ ] e i2(krz,s k R ) z,s tr Q (z, s, k)q(z, s, k )S n (s, k ; s, k) dkdsdk ds dz (4.161) where S n is the covariance matrix of the noise. We denote it by the letter S because the noise is already written in terms of a frequency variable k and is therefore analogous to a spectral density function. In order to simplify our expression so that we may add it to the target and clutter terms we make the assumption that the noise is stationary in both s and k. This is equivalent to assuming that the noise has been prewhitened. Mathematically this assumption is written as S n i,j(s, k; s, k ) = S n i,j(s, k)δ(s s )δ(k k ) (4.162)
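The delta-correlated model just assumed is what prewhitening produces in practice. A minimal numerical sketch follows, using a Cholesky-based whitener (a standard construction, not taken from this thesis) applied to noise with an arbitrary assumed covariance:

```python
import numpy as np

# Sketch of prewhitening: if the recorded noise has a known covariance R_n,
# multiplying the data by L^{-1}, where R_n = L L^T (Cholesky factorization),
# yields noise whose empirical covariance is approximately the identity, i.e.
# the delta-correlated model assumed for S^n above.
rng = np.random.default_rng(0)
m = 8                                  # number of (s, k) samples per noise vector
A = rng.standard_normal((m, m))
R_n = A @ A.T + m * np.eye(m)          # an assumed positive-definite noise covariance
L = np.linalg.cholesky(R_n)

n_trials = 200_000
noise = rng.multivariate_normal(np.zeros(m), R_n, size=n_trials)  # colored noise
white = np.linalg.solve(L, noise.T).T                             # prewhitened noise

# Empirical covariance of the whitened noise is approximately the identity.
emp_cov = white.T @ white / n_trials
assert np.allclose(emp_cov, np.eye(m), atol=0.05)
```

The whitener works because the covariance of \(L^{-1}n\) is \(L^{-1} R^n L^{-T} = I\); after this step the noise spectral density can be treated as flat, as in the stationarity assumption above.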

92 81 Inserting this specific S n into equation (4.161) we obtain J n (Q) = [ ] tr Q (z, s, k)q(z, s, k) S n (s, k) dkdsdz (4.163) where without loss of generality we replace s, k with s, k. Our last step is to perform the Stolt change of variables from (s, k) to ξ to obtain, J n (Q) = [ ] tr Q (z, ξ)q(z, ξ) S n (ξ) η(z, z, ξ)dξdz. (4.164) We rewrite our expression for J T and J C in terms of the spatial frequency variable now in order to eventually be able to combine these terms with the noise terms. Our goal is to have the same integrations present in all three terms. We define the following spectral density functions as they were defined in the statement of the theorem: C T (x, x ) = e ix ζ e ix ζ S T (ζ, ζ )dζdζ (4.165) R C (x, x ) = e ix ζ e ix ζ S C (ζ, ζ )dζdζ. Note these are both 4 4 matrices as well. Switching the two arguments of the covariance matrices gives us the following expressions C T (x, x) = e ix ζ e ix ζ S T (ζ, ζ )dζdζ (4.166) R C (x, x) = e ix ζ e ix ζ S C (ζ, ζ )dζdζ. We find these specifically because we require these two expressions in order to rewrite the mean-square error in terms of ζ and ζ. In particular we insert these into the equations for J T (Q) and J C (Q). We have J T (Q) [ { e i(x x ) ξ e i(x ζ x ζ ) } tr Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) { } ] Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) S T (ζ, ζ ) dξdxdx dζdζ (4.167)

93 82 and J C (Q) [ e i(x x ) ξ e i(x ζ x ζ ) {Q(x tr, ξ)a C (x, ξ)η(x, x, ξ) } ] dξdxdx dζdζ. (4.168) { Q(x, ξ)a C (x, ξ)η(x, x, ξ) } S C (ζ, ζ ) We now use (4.151) and the symbol calculus again to carry out the integrations in the variables x and ξ. We obtain the leading order contributions to J T and J C as J T (Q) [ { } e ix (ζ ζ) tr Q(x, ζ )A T (x, ζ )η(x, x, ζ ) χ Ω (x, ζ ) ] { } Q(x, ζ )A T (x, ζ )η(x, x, ζ ) χ Ω (x, ζ ) S T (ζ, ζ ) dxdζdζ (4.169) and J C (Q) [ {Q(x, e ix (ζ ζ) tr ζ )A C (x, ζ )η(x, x, ζ ) } ] dxdζdζ. (4.170) { Q(x, ζ )A C (x, ζ )η(x, x, ζ ) } S C (ζ, ζ ) Now for the bias term, B(Q), we introduce the function M, which is analogous to a spectral density function and is defined as follows: µ(x, x ) = µ(x)µ (x ) = e ix ζ e ix ζ M(ζ, ζ )dζdζ. (4.171) Again switching the two arguments we obtain µ(x, x) = e ix ζ e ix ζ M(ζ, ζ )dζdζ. (4.172) Now we substitute (4.172) into the expression for B(Q) and perform the same

94 83 stationary phase calculation in x and ξ to arrive at B(Q) [ { } e ix (ζ ζ) tr Q(x, ζ )A T (x, ζ )η(x, x, ζ ) χ Ω (x, ζ ) ] { } Q(x, ζ )A T (x, ζ )η(x, x, ζ ) χ Ω (x, ζ ) M(ζ, ζ ) dxdζdζ. (4.173) We have now finished simplifying the terms that make up the mean-square error. Our next step is to return to task of finding the optimal filter Q. Recall our goal is to find the Q which minimizes J (Q). We do this by finding the variation of J with respect to Q. That is, we look for the Q which satisfies 0 = d dɛ JT (Q + ɛq ɛ ) + d ɛ=0 dɛ J C (Q + ɛq ɛ ) + d ɛ=0 dɛ J n (Q + ɛq ɛ ) ɛ=0 + d dɛ B(Q + ɛq ɛ ) (4.174) ɛ=0 for all possible Q ɛ. This variational optimization technique comes from calculus of variations and is analogous to the Euler-Lagrange method. We use such a method because our quantity J (Q) is a functional and not a function. We now begin calculating this derivative. We focus on the first term on the right-hand side of (4.174) and then apply similar steps to obtain the other terms in the derivative. We have [ ] d dɛ JT (Q + ɛq ɛ ) = e ix (ζ ζ) tr (Q ɛ A T η) ((QA T η χ Ω )S T ) dxdζdζ ɛ=0 [ ] + e ix (ζ ζ) tr (QA T η χ Ω ) ((Q ɛ A T η)s T ) dxdζdζ. (4.175) Now if we interchange ζ and ζ in the second integral and use the fact that

95 84 S T (ζ, ζ ) = S T (ζ, ζ) we obtain d dɛ JT (Q + ɛq ɛ ) = ɛ=0 + e ix (ζ ζ ) tr [ e ix (ζ ζ) tr ] (Q ɛ A T η) ((QA T η χ Ω )S T ) dxdζdζ ] dxdζdζ. [ (QA T η χ Ω ) ((Q ɛ A T η)s T (ζ, ζ )) (4.176) Now using the fact that for any square matrix M, tr(m) = tr(m ) (where the superscript here refers to transpose) we have [ ] d dɛ JT (Q + ɛq ɛ ) = e ix (ζ ζ) tr (Q ɛ A T η) ((QA T η χ Ω )S T ) dxdζdζ ɛ=0 [ ] + e ix (ζ ζ ) tr S T (ζ, ζ )(Q ɛ A T η) (QA T η χ Ω ) dxdζdζ. (4.177) And finally we use the fact that for any square matrices A, B, and C, tr(abc) = tr(bca) to obtain [ ] d dɛ JT (Q + ɛq ɛ ) = e ix (ζ ζ) tr (Q ɛ A T η) ((QA T η χ Ω )S T ) dxdζdζ ɛ=0 [ ] + e ix (ζ ζ ) tr (Q ɛ A T η) (QA T η χ Ω )S T (ζ, ζ ) dxdζdζ. (4.178) We notice that the second term is exactly the complex conjugate of the first term. This leads us to the following expression: d dɛ JT (Q + ɛq ɛ ) = 2 Re ɛ=0 [ ] e ix (ζ ζ) tr (Q ɛ A T η) ((QA T η χ Ω )S T ) dxdζdζ. (4.179) Performing similar steps we obtain the expressions for the variational deriva-

96 85 tives of J C, J n, and B: d dɛ J C (Q + ɛq ɛ ) = 2 Re ɛ=0 d dɛ J n (Q + ɛq ɛ ) = 2 Re ɛ=0 d dɛ B(Q + ɛq ɛ ) = 2 Re ɛ=0 [ ] e ix (ζ ζ) tr (Q ɛ A C η) ((QA C η)s C ) dxdζdζ, ] tr [Q ɛ Q S n ηdxdζ, [ ] e ix (ζ ζ) tr (Q ɛ A T η) ((QA T η χ Ω )M) dxdζdζ. (4.180) Now inserting the above results into equation (4.174) we have 0 = 2 Re + 2 Re + 2 Re + 2 Re [ ] e ix (ζ ζ) tr (Q ɛ A T η) ((QA T η χ Ω )S T ) dxdζdζ (4.181) [ ] e ix (ζ ζ) tr (Q ɛ A C η) ((QA C η)s C ) dxdζdζ [ ] tr Q ɛ Q Sn ηdxdζ [ ] e ix (ζ ζ) tr (Q ɛ A T η) ((QA T η χ Ω )M) dxdζdζ. We combine the four terms to obtain 0 = 2 Re + 2 Re + 2 Re [ ] e ix (ζ ζ) tr (Q ɛ A T η) ((QA T η χ Ω )(S T + M)) dxdζdζ [ ] e ix (ζ ζ) tr (Q ɛ A C η) ((QA C η)s C ) dxdζdζ tr [ ] Q ɛ Q Sn ηdxdζ (4.182) Now in the first two terms we use the fact that for any square matrices A, B,

97 86 C, and D we have tr(abcd) = tr(bcda). This allows us to write 0 = 2 Re + 2 Re + 2 Re = 2 Re ] tr [Q ɛ (ηeix (ζ ζ) )(QA T η χ Ω )(S T + M)(A T ) dxdζdζ ] tr [Q ɛ (ηeix (ζ ζ) )(QA C η)(s C )(A C ) dxdζdζ ] tr [Q ɛ Q S n dxdζ [ { tr Q ɛ ηe ix (ζ ζ) [(QA T η χ Ω )(S T + M)(A T ) }] + (QA C η)(s C )(A C ) ] dxdζdζ + 2 Re ] tr [Q ɛ Q S n dxdζ. (4.183) In order to derive the condition which guarantees our equation equals zero we write out the trace operator in summation form: 0 = 2 Re 4 4 k=1 r=1 Q ɛ,(k,r) + (QA C η)(s C )(A C ) ] (r,k) { [ ηe ix (ζ ζ) (QA T η χ Ω )(S T + M)(A T ) dζ + (Q S n η) (r,k) }dxdζ. (4.184) We see that (4.184) holds for all Q ɛ if Q satisfies the following integral equation ( 0 = + (Q S n η) (r,k), ) ηe ix (ζ ζ) [(QA T η χ Ω )(S T + M)(A T ) + (QA C η)s C (A C ) ]dζ (r,k) (4.185) r and k. This completes the proof of part (1) of the theorem. We now consider part (2). 2. We now consider the task of minimizing the variance of the error process.

98 87 Ultimately we decide to perform this alternate task because it results in an algebraic expression for Q which aids in numerical calculations. We begin with the leading order contribution to V(Q) (the variance of the error term) which is given by V(Q) = J T (Q) + J C (Q) + J n (Q). (4.186) Again note the only term missing is the bias term B(Q). Following the above calculations we have that r and k ( 0 = + (Q S n η) (r,k). ) ηe ix (ζ ζ) [(QA T η χ Ω )S T (A T ) + (QA C η)s C (A C ) ]dζ (r,k) (4.187) We can make a stationarity assumption on (T µ) and C such that S T (ζ, ζ ) = S T (ζ)δ(ζ ζ ) (4.188) S C (ζ, ζ ) = S C (ζ)δ(ζ ζ ). (4.189) With these assumptions our condition simplifies to become ] 0 = η [((QA T η χ Ω )S T (A T ) + (QA C η)s C (A C ) + (Q S n η) (r,k) (4.190) (r,k) r and k. We may rewrite this as ] Q [ η 2 (A T S T (A T ) + A C S C (A C ) ) + η S n = η χ Ω S T (A T ). (4.191) Then we take the adjoint of the above equation to obtain [ η 2 (A T (S T ) (A T ) + A C (S C ) (A C ) ) + η( S n ) ] Q = ηa T (S T ) χ Ω (4.192)

Therefore, if the matrix in brackets is invertible, we obtain the following filter:
\[ Q = \Big[\eta^2\big(A^T (S^T)^{*} (A^T)^{*} + A^C (S^C)^{*} (A^C)^{*}\big) + \eta\,(\tilde S^n)^{*}\Big]^{-1}\, \eta\, A^T (S^T)^{*}\, \chi_\Omega. \tag{4.193} \]
This completes the proof of part (2).

Correlated Clutter and Target Case

We now consider the case when the clutter process and the target process are statistically dependent. Recall our forward model
\[ D(k,s) = \int e^{-2ikR_{x,s}}\,\big[ A^T(x,k,s)\,T(x) + A^C(x,k,s)\,C(x) \big]\,dx + n(k,s). \tag{4.194} \]
Also recall our statistical assumptions; the first-order statistics remain the same:
\[ E[T(x)] = \mu(x), \tag{4.195} \]
\[ E[C(x)] = 0, \tag{4.196} \]
\[ E[n(k,s)] = 0. \tag{4.197} \]
For the second-order statistics we again have the autocovariance matrices for T, C, and n, where we define the (l, k)th entries as follows:
\[ C^T_{l,k}(x,x') = E\big[(T_l(x) - \mu_l(x))\,\big(T_k(x') - \mu_k(x')\big)^{*}\big], \tag{4.198} \]
\[ R^C_{l,k}(x,x') = E\big[C_l(x)\,C_k(x')^{*}\big], \tag{4.199} \]
\[ R^n_{l,k}(k,s;k',s') = E\big[n_l(k,s)\,n_k(k',s')^{*}\big]. \tag{4.200} \]
We assume that T and n are statistically independent, and also that C and n are statistically independent, as before. However, we now assume that T and C are statistically dependent with the following cross-covariance matrices. We define the

100 89 (l, k)th entries of the cross-covariance matrices, C T,C and C C,T as Theorem. C T,C l,k (x, x ) = E[(T l (x) µ l (x))c k (x )] = E[T l (x)c k (x )] (4.201) C C,T l,k (x, x ) = E[C l (x)(t k (x ) µ k (x ))] = E[C l (x)t k (x )]. (4.202) 1. Let D be given by (4.119) and let I be given by (4.128). Assume S n is given by (4.126) and define S T, S C, M, S T,C, and S C,T as follows: C T (x, x ) = R C (x, x ) = µ(x)µ (x ) = C T,C (x, x ) = C C,T (x, x ) = e ix ζ e ix ζ S T (ζ, ζ )dζdζ (4.203) e ix ζ e ix ζ S C (ζ, ζ )dζdζ (4.204) e ix ζ e ix ζ M(ζ, ζ )dζdζ (4.205) e ix ζ e ix ζ S T,C (ζ, ζ )dζdζ (4.206) e ix ζ e ix ζ S C,T (ζ, ζ )dζdζ (4.207) then any filter Q satisfying a symbol estimate and also minimizing the meansquare error J (Q) must be a solution of the following integral equation r and k: { [ 0 = ηe ix (ζ ζ) (QA T η χ Ω )[(S T + M)(A T ) + S C,T (A C ) ] ] } + (QA C η)[s C (A C ) + S T,C (A T ) dζ } + {ηqs n (r,k) (r,k) 2. If we make the stationarity assumptions S T (ζ, ζ ) = S T (ζ)δ(ζ ζ ) (4.208) S C (ζ, ζ ) = S C (ζ)δ(ζ ζ ) (4.209) S T,C (ζ, ζ ) = S T,C (ζ)δ(ζ ζ ) (4.210) S C,T (ζ, ζ ) = S C,T (ζ)δ(ζ ζ ), (4.211)

101 90 Proof. then the filter Q minimizing the total error variance V(Q) is given by: ) } ] 1 Q = [ η {(A 2 T (S T ) + A C (S C,T ) (A T ) + (A C (S C ) + A T (S T,C ) )(A C ) + ηr n ) η (A T (S T ) + A C (S C,T ) χ. (4.212) 1. We begin again by defining the error process and minimizing the meansquare error with respect to the filter Q. Recall the error process E(z) = I(z) T (z) and the functional describing the mean-square error J (Q) given by J (Q) = E[ E(z) 2 ]dz = = J T (Q) + J C (Q) + J n (Q) + J C,T (Q). E[ (K(F T (T ) + F C (C) + n))(z) T (z) 2 ]dz (4.213) Note that because of the correlation of the target and clutter processes we have the additional cross-term J C,T. The derivations for the first three terms remain unchanged so for now we focus on simplifying the cross-term. Explicitly J C,T is given by J C,T (Q) = 2 Re [ E { e i(x z) ξ C (x) Q(z, ξ)a C (x, ξ)η(x, z, ξ)} dξdx ] }T (x )dξ dx dz. e i(x z) ξ {Q(z, ξ )A T (x, ξ )η(x, z, ξ ) χ Ω (z, ξ ) (4.214) Following the same outline of steps in the proof of the statistically independent result, we now use (4.151) to carry out the integrations in z and ξ. We then

102 91 find that the leading-order contribution to the cross-term can be written J C,T (Q) 2 Re { [ E { } e i(x x ) ξ C (x) Q(x, ξ)a C (x, ξ)η(x, x, ξ) } ] T (x ) dξdxdx. Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) (4.215) Now if we write out the matrix multiplication in summation form and carry the expectation through to the random elements T and C, we obtain the following expression J C,T (Q) 2 Re [{ e i(x x ) ξ tr Q(x, ξ)a C (x, ξ)η(x, x, ξ) { } ] Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) C C,T (x, x ) dξdxdx. (4.216) } We then define the cross-spectral density matrix S C,T (ζ, ζ ) and S T,C (ζ, ζ ) as follows C C,T (x, x ) = e ix ζ e ix ζ S C,T (ζ, ζ )dζdζ (4.217) C T,C (x, x ) = e ix ζ e ix ζ S T,C (ζ, ζ )dζdζ. (4.218) In order to write this term in terms of ζ and ζ we insert (4.217) into the expression for J C,T to obtain the expression J C,T (Q) 2 Re [{ e i(x x ) ξ e i(x ζ x ζ ) tr Q(x, ξ)a C (x, ξ)η(x, x, ξ) { } ] Q(x, ξ)a T (x, ξ)η(x, x, ξ) χ Ω (x, ξ) S C,T (ζ, ζ ) dξdxdx dζdζ. } (4.219) Note we define S T,C for use in the simplification of the variational derivative. Again we use (4.151) and the symbol calculus to obtain the leading-order

103 92 contribution from the integrations in x and ξ: J C,T (Q) 2 Re [{ e ix (ζ ζ) tr Q(x, ζ)a C (x, ζ)η(x, x, ζ) { } ] Q(x, ζ)a T (x, ζ)η(x, x, ζ) χ Ω (x, ζ) S C,T (ζ, ζ ) dxdζdζ. (4.220) } Our next step is to find the variational derivative of J C,T with respect to Q and rewrite it in such a way that we can combine it easily with the terms previously derived. We first note that the variational derivative can be written as d dɛ (J C,T (Q + ɛq ɛ )) = 2 Re ɛ=0 +2 Re e ix (ζ ζ) tr [ ] e ix (ζ ζ) tr (QA C η) Q ɛ A T ηs C,T (ζ, ζ ) dxdζdζ [ ] η(a C ) Q ɛ (QAT η χ Ω )S C,T (ζ, ζ ) dxdζdζ. (4.221) In the first term we interchange ζ and ζ and use the fact that S T,C (ζ, ζ ) = S C,T (ζ, ζ) to obtain d dɛ (J C,T (Q + ɛq ɛ )) = 2 Re ɛ=0 +2 Re e ix (ζ ζ) tr [ ] e ix (ζ ζ ) tr (QA C η) Q ɛ A T ηs T,C (ζ, ζ ) dxdζdζ [ ] η(a C ) Q ɛ (QAT η χ Ω )S C,T (ζ, ζ ) dxdζdζ. (4.222) We then use the fact that for any square matrix A we have that tr(a ) = tr(a) (where indicates transpose) in the first term to obtain d dɛ (J C,T (Q + ɛq ɛ )) = 2 Re ɛ=0 +2 Re e ix (ζ ζ) tr [ ] e ix (ζ ζ ) tr S T,C (Q ɛ A T η) (QA C η) dxdζdζ [ ] η(a C ) Q ɛ (QAT η χ Ω )S C,T (ζ, ζ ) dxdζdζ. (4.223) Next we use the fact that for any square matrices A, B, and C that tr(abc) =

104 93 tr(bca) in the first term. This gives us d dɛ (J C,T (Q + ɛq ɛ )) = 2 Re ɛ=0 +2 Re e ix (ζ ζ) tr ] e ix (ζ ζ ) tr [(Q ɛ A T η) (QA C η)s T,C dxdζdζ ] [ η(a C ) Q ɛ (QAT η χ Ω )S C,T (ζ, ζ ) dxdζdζ. (4.224) Also use the fact that for any complex number z, Re(z) = Re(z), in the first term to find d dɛ (J C,T (Q + ɛq ɛ )) = 2 Re ɛ=0 +2 Re e ix (ζ ζ) tr ] e ix (ζ ζ ) tr [(Q ɛ A T η) (QA C η)s T,C dxdζdζ ] [ η(a C ) Q ɛ (QAT η χ Ω )S C,T (ζ, ζ ) dxdζdζ. (4.225) Finally in both terms we use the face that for any square matrices A, B, C, and D tr(abcd) = tr(bcda) to write d dɛ (J C,T (Q + ɛq ɛ )) = 2 Re ɛ=0 +2 Re e ix (ζ ζ) tr ] e ix (ζ ζ) tr [Q ɛ η(qac η)s T,C (A T ) dxdζdζ [ Q ɛ η(qat η χ Ω )S C,T (A C ) dxdζdζ. (4.226) Combining with the other terms of the MSE we find that the variational derivative of the MSE with respect to Q is given by 0 = d dɛ (J (Q + ɛq ɛ )) ɛ=0 [ { = 2 Re tr Q ɛ ηe ix (ζ ζ) [ (QA T η χ Ω )[(S T + M)(A T ) + S C,T (A C ) ] + (QA C η)[s C (A C ) + S T,C (A T ) ] dζ + ηqr n }]dxdζ. (4.227) This expression holds for all Q ɛ if Q satisfies the following integral equation

105 94 for all r and for all k { 0 = [ ηe ix (ζ ζ) (QA T η χ Ω )[(S T + M)(A T ) + S C,T (A C ) ] + (QA C η)[s C (A C ) + S T,C (A T ) ] dζ } (r,k) } + {ηqr n (r,k) (4.228) This completes the proof of part (1). 2. Now if we consider minimizing simply the variance as before, we obtain the following integral equation for Q { 0 = [ ηe ix (ζ ζ) (QA T η χ Ω )[S T (A T ) + S C,T (A C ) ] + (QA C η)[s C (A C ) + S T,C (A T ) ] dζ } (r,k) } + {ηqr n (r,k) (4.229) We may also make the following stationarity and joint stationarity assumptions S T (ζ, ζ ) = S T (ζ)δ(ζ ζ ) (4.230) S C (ζ, ζ ) = S C (ζ)δ(ζ ζ ) (4.231) S T,C (ζ, ζ ) = S T,C (ζ)δ(ζ ζ ) (4.232) S C,T (ζ, ζ ) = S C,T (ζ)δ(ζ ζ ). (4.233) This leads to the following algebraic expression for Q { } 0 = η (QA T η χ Ω )[S T (A T ) + S C,T (A C ) ] + (QA C η)[s C (A C ) + S T,C (A T ) ] + QR n η. (4.234) We may rearrange the equation and take the transpose as in the last step of

106 95 the filter derivation in the independent case to obtain ) Q = [ η {(A 2 T (S T ) + A C (S C,T ) (A T ) + ) η (A T (S T ) + A C (S C,T ) χ. which completes the proof of the theorem. } ] 1 (A C (S C ) + A T (S T,C ) )(A C ) + ηr n (4.235) 4.6 Numerical Simulations We conclude our study of SAR imaging of extended targets with some numerical experiments to verify the theory proved in the previous section. Recall we indicated that the most significant difference between our polarimetric SAR processing and the type of processing in practice is that we utilize every set of polarimetric data to reconstruct each element of the scattering matrix. This amounts to the fact that the optimal filter Q is a fully dense filter matrix. In standard SAR processing one would make the assumption that Q is a diagonal matrix. That is, the element T i,j (x) may be reconstructed from the data set D i,j (k, s) for each i = a, b and j = a, b. We saw this assumption presenting itself in the forward model in our comparison to the standard RCS model for cylindrical extended targets. We have calculated the optimal filter and allowed it to be in general fully dense. We consider a simple thought experiment that shows even under the simplest assumptions on the target and clutter processes the optimal filter in the mean-square sense is not diagonal. Observe that for the remainder of the chapter we will consider specifically the case when target and clutter are not statistically correlated and we will look only at the optimal filter for the case of minimizing variance as in (4.139). This choice is made solely for simplicity in terms of the numerical calculations necessary. Now note that if the spectral density matrices S T, S C, and S n are diagonal,

we obtain a diagonal filter Q where the entries along the diagonal simplify to
\[ Q_{i,i} = \frac{\eta\,\big(A^T_{(i,i)}\big)^{*}\,S^T_{(i,i)}\,\chi_{\Omega,(i,i)}}{\eta^2\big(\big|A^T_{(i,i)}\big|^2\,S^T_{(i,i)} + \big|A^C_{(i,i)}\big|^2\,S^C_{(i,i)}\big) + \eta\,S^n_{(i,i)}}. \tag{4.236} \]
This corresponds to performing a component-by-component backprojection, where the filter is derived to minimize the mean-square error in each component of the image with respect to the corresponding component of the actual target function. This result is derived directly in [15], and it is an example of a backprojection filter that can be used in standard polarimetric SAR imaging.

It is not necessarily the case, however, that these spectral density matrices are diagonal. We may assume that the noise has been whitened, so that S^n is diagonal, but it is not as simple to argue that the target and clutter spectral densities have such a structure. To demonstrate this we calculate example covariance matrices for specific target and clutter functions.

For our target example we begin by assuming that all randomness in T(x) arises in the scattering vector \(S(\theta_T)\); for simplicity we take \(\rho_T(x)\) to be deterministic. In particular, we assume that the orientation \(\theta_T\) is a random process dependent on the location x. For simplicity we assume \(\theta_T\) is a Gaussian random process where, for any \(\theta_T(x)\) and \(\theta_T(x')\), the joint probability density function has the form
\[ f(\theta_T, \theta_T') = \frac{1}{\pi\sqrt{3}}\,\exp\Big[-\frac{1}{3}\big(2\theta_T^2 - 2\theta_T\theta_T' - 10\theta_T - 10\theta_T' + 2\theta_T'^2 + 50\big)\Big], \tag{4.237} \]
where \(\theta_T = \theta_T(x)\) and \(\theta_T' = \theta_T(x')\). This corresponds to a joint Gaussian density with mean \(\mu = [5, 5]^T\) and covariance matrix
\[ C = \begin{bmatrix} 1 & 1/2 \\ 1/2 & 1 \end{bmatrix}. \]
We note that this implies the marginal density function for any \(\theta_T\) is given by
\[ f(\theta_T) = \frac{1}{\sqrt{2\pi}}\,\exp\Big[-\frac{\theta_T^2}{2} + 5\theta_T - \frac{25}{2}\Big]. \tag{4.238} \]

108 97 Under these assumptions we obtain the following covariance matrix for T (x) C T (x, x ) = C T (θ T, θ T ) = ρ T (x)ρ T (x ) (4.239) where we have assumed for simplicity that T ab (x) = T ba (x). This assumption reduces the dimensions of our original covariance matrix to a 3 3 matrix. Note we calculate the covariance matrix by averaging over θ T and θ T. We also calculated the result for when µ = [0, 0] T with C the same as above. In this case our covariance structure was slightly simpler, however still not diagonal. We have C T (x, x ) = C T (θ T, θ T ) = ρ T (x)ρ T (x ) (4.240) Also for the sake of completeness we found the covariance structure within each pixel, that is the case when x = x. The result is given by C T (x, x) = ρ T (x) (4.241) We see that even a Gaussian random target process does not result in a diagonal C T and therefore S T will also not be diagonal. In addition, we consider an example clutter function, where again we assume all the randomness arises due to the random process θ C (x). In this case we assume that each realization θ C (x) is uniformly distributed in the interval [0, π/2]. This assumption has been previously used for clutter in [21]. We also assumed that any two realizations θ C (x) and θ C (x ) were independent. Therefore the covariance

109 98 matrix for C(x) for any two x and x is given by π 0.25 C C (x, x ) = ρ C (x)ρ C (x ) 0.25π π 2 0.5π 1. (4.242) π Here we have used C C to denote the covariance as opposed to R C because in this case the mean of the clutter function is not zero as assumed before. It is clear that even with rather simple statistical assumptions our covariance matrices for our target and clutter functions, and hence the spectral density functions, are not diagonal. We can therefore say with certainty that our resultant filter Q defined in (4.193) is different from the filter in the component-by-component backprojection from (4.236). Our derived filter implies that we utilize all the components of the data when creating each component of the image, that is each D i,j is used to create the image I i,j for i = a, b and j = a, b. It is also important to observe that these statistical assumptions are rather simple and introduce randomness in one specific way. We may also assume, for example, that the scattering strengths ρ T and ρ C are random processes dependent on the location x. This would introduce complexity in the joint density functions, and complicate the covariance matrix calculation perhaps significantly. We stress that the assumptions made to arrive at the above example matrices are simple ones and do not reflect a very realistic target and clutter scene. We find that even in this simple case, these covariance matrices and hence the spectral density matrices are not diagonal. Therefore our derived filter (4.193) is indeed different and novel in comparison with a component-by-component backprojection scheme in which one treats each set of data independently to form the four images that make up our I(x). This component-by-component scheme may be the one defined in (4.236), or an even simpler backprojection scheme in which the statistics of the target and clutter scene are not taken into account.
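The non-diagonality can also be checked by Monte Carlo. The sketch below estimates the covariance of a dipole-like scattering vector with orientation uniformly distributed on \([0, \pi/2]\), as in the clutter example above. The reduced three-component form \(S(\theta) = [\cos^2\theta,\ \sin\theta\cos\theta,\ \sin^2\theta]\) is an assumed stand-in for the scattering vector used in this chapter; even with this simple stand-in, the HH–VV entry of the covariance is clearly nonzero.

```python
import numpy as np

# Monte Carlo check that a random dipole orientation produces a non-diagonal
# scattering-vector covariance. S(theta) = [cos^2, sin*cos, sin^2] is an
# assumed reduced form (the exact vector used in the thesis may differ),
# with theta ~ Uniform[0, pi/2] as in the clutter example.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, np.pi / 2, size=1_000_000)
S = np.stack([np.cos(theta)**2,
              np.sin(theta) * np.cos(theta),
              np.sin(theta)**2])             # shape (3, N)

mean = S.mean(axis=1, keepdims=True)
cov = (S - mean) @ (S - mean).T / theta.size

# The HH-VV entry equals E[cos^2 sin^2] - E[cos^2]E[sin^2] = 1/8 - 1/4 = -1/8,
# so the covariance (and hence the spectral density) cannot be diagonal.
assert np.abs(cov[0, 2]) > 0.05
assert not np.allclose(cov, np.diag(np.diag(cov)), atol=1e-3)
```

This mirrors the analytic calculation above: independence across pixels makes the covariance diagonal in x, but not across the polarimetric components, which is exactly why the optimal filter couples the data channels.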

110 Numerical Experiments We move on now to discuss the specific numerical experiments performed to verify our theory. We assume that our scene on the ground is of the size 50 meters by 50 meters where there exist 100 pixels by 100 pixels. That is, our resolution cell size is.5 meters by.5 meters. The coordinate system used is target-centered so the target is always located at the origin of the scene. We consider targets with varying orientation for example θ T (x) = 0, π/4, π/2 at each x location that the target is found. We also have that ρ T (x) = 1 for all target locations x. Note we assume the target is always twenty pixels in length and one pixel in width. For the clutter process we assume that a clutter dipole is located at every possible x in the scene of interest. Also note that all the random variables ρ C (x) are independent identically distributed (i.i.d.) Gaussian random variables with zero-mean and unit complex variance. The random variables θ C (x) are i.i.d uniform between the angles [0, π/2]. We note that in this case the total clutter process C(x) is wide-sense stationary and therefore the stationarity assumption (4.138) holds. Also observe that measurement noise is not explicitly included in the numerical simulations as it is simple to assume that the data has been prewhitened. The flight path is always assumed to be linear with the coordinates given by γ(s) = [x 0, s, z 0 ], where we have assumed that x 0 and z 0 are fixed or constant. The two antennas used for transmission and reception have orientations ê a = [1, 0, 0] and ê b = [0, 1, 0] which are defined with respect to the origin in the scene on the ground. Note we may think of a as having the horizontal or H orientation and b therefore has the vertical or V orientation. Our frequency range is GHz, where we sample at a rate above Nyquist. We also note the way in which we calculate the spectral density functions of the processes T (x) and C(x). 
Since the target is not actually random in the experiments, we calculate its spectral density via the formula
\[ S^T(\zeta) = \Big|\int e^{-ix\cdot\zeta}\,T(x)\,dx\Big|^2. \tag{4.243} \]
The clutter covariance matrix was calculated by hand given the simple assumptions

111 100 on its distribution. That is, we average over C(x) and C(x ). Then we take the Fourier transform in order to calculate S C (ζ). We also note our definition of signalto-clutter ratio (SCR) is given by SCR = 20 log 1 N N 1 (T (x i) µ T (x i ) 2 E[ C 2 ] (4.244) where N is the number of grid points and µ T is the mean of the target function. Note that in producing the data the directional scattering assumptions on the target and clutter process, (4.108) and (4.116), are not used. In this way we avoid making any crimes of inversion. After the simulated data is produced we image the target scattering vector, or target function, T (x) using both the standard SAR processing and our coupled polarimetric processing. We will then compare two sets of images for each example. We have I s (z) and I c (z) where these are the component-by-component backprojection image vector and coupled backprojection image vector respectively. We will also note the differences in mean-square error and the final image signal-to-clutter (SCR) ratios. Before we go into specific results we note one issue present in our coupled numerical reconstruction scheme. The matrix in the expression for Q in equation (4.193) is typically close to being singular. This poses some issues in finding its inverse numerically. We have implemented a regularization scheme in which we diagonally weight the matrix in order to improve its condition number. This diagonal weighting depends on a constant factor which we call our regularization parameter, similar to the terminology used in Tikhonov regularization. The choice of the regularization parameter is done for each case individually and has not yet been optimized for minimizing MSE or final image SCR. This is left as future work. 
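The regularized inversion just described can be sketched as a diagonally loaded linear solve at each spatial frequency. In the sketch below the amplitude and spectral-density factors are small random stand-ins for the true quantities in (4.193), and `eps` plays the role of the regularization parameter discussed above; this is an illustration of the numerical scheme under those assumptions, not the thesis code.

```python
import numpy as np

# Sketch of a regularized evaluation of the coupled filter at one spatial
# frequency: Q = (eta^2 (A_T S_T A_T^H + A_C S_C A_C^H) + eta S_n + eps I)^{-1}
#               eta A_T S_T^H chi, with eps the diagonal-loading parameter.
def coupled_filter(A_T, A_C, S_T, S_C, S_n, eta, chi, eps):
    M = eta**2 * (A_T @ S_T @ A_T.conj().T + A_C @ S_C @ A_C.conj().T) \
        + eta * S_n + eps * np.eye(A_T.shape[0])
    rhs = eta * (A_T @ S_T.conj().T) * chi
    return np.linalg.solve(M, rhs)     # solve rather than forming the inverse

rng = np.random.default_rng(2)
n = 3
A_T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A_C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B_T = rng.standard_normal((n, n))
B_C = rng.standard_normal((n, n))
S_T = B_T @ B_T.T                      # positive-semidefinite spectral densities
S_C = B_C @ B_C.T
S_n = np.eye(n)                        # prewhitened noise
Q = coupled_filter(A_T, A_C, S_T, S_C, S_n, eta=1.0, chi=1.0, eps=1e-6)

# The resulting filter is fully dense: every data channel contributes to
# every image channel, unlike the diagonal component-by-component filter.
assert Q.shape == (n, n)
assert np.all(np.abs(Q) > 0)
```

Solving the loaded system rather than explicitly inverting the near-singular matrix is the standard way to implement the Tikhonov-style weighting described above.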
Once this is optimized we expect the improvements made in using our coupled scheme to be even more significant Example One - Horizontally Polarized Target We first consider the case when the target has the orientation ê T (x) = [1, 0, 0] which is parallel to the a, or H, antenna. In Figure (4.8) we show the actual target scene on the left and then the target-embedded-in-clutter scene on the right. We

Figure 4.8: HH component of target vector and target plus clutter vector, horizontally polarized target

We assume that the target will not be visible when the antenna reaches the line y = 0, as this is where the target lies and at this point R_{x,s} is parallel to ê_T. We display the data obtained using the a antenna for both transmission and reception, using a for transmission and b for reception, and also the case when b is used for both processes, in Figures (4.9) and (4.10). The first group of data is target-only data, Figure (4.9), and the second set is target-embedded-in-clutter data, Figure (4.10). We see that indeed there appears to be no data collected when s = 38, as this is the point where the flight path crosses the x-axis on which the target lies. When clutter is present the target data is completely obscured. We do note, however, that the target is visible from almost all other points on the flight path, indicating that our directional scattering assumption is not entirely accurate. Nevertheless, making the assumption aids in formulating the inversion scheme.

Figure 4.9: HH, HV, and VV target only data, horizontally polarized target

In the next figure, Figure (4.11), we present the results of the standard image processing, and then follow with the results of our coupled processing in Figure (4.12). We present these side-by-side with the actual target function. Here we show only the HH image, as the other two images are flat, as expected. Note that in this case we have a signal-to-clutter ratio of 10 dB. We observe that in Figure (4.12) the scale of pixel values is about nine orders of magnitude greater than the scale in the standard processed image in Figure (4.11). Also note that the image is significantly more focused in the coupled processing case. We also plot the signal-to-clutter ratio versus the mean-square error in Figure (4.13). The MSE is reduced by an order of magnitude with the coupled processing technique. This is of note because in our filter derivation we minimized the variance of the error process rather than the MSE, and yet the MSE is significantly reduced with our technique. We note the slight increase when the SCR is 20 dB; this is most likely due to the particular realization of the clutter used in that calculation.

Figure 4.10: HH, HV, and VV target embedded in clutter data, horizontally polarized target

Lastly we display the final image signal-to-clutter ratio in Tables (4.1) and (4.2). We calculate the final SCR by performing the reconstruction techniques on target-only data and clutter-only data and then comparing the energy in each set of images. Comparing the coupled processed image with the standard processed image, we see a significant improvement in final image SCR. Final SCR and MSE are two key parameters when considering the success of an imaging algorithm. This trend will be displayed again in the next two examples; however, the method performs best in this example.
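The directional-scattering behavior described above can be illustrated with a short sketch. The geometry here is hypothetical (a straight flight path offset from a dipole-like target at the origin, with ê_T = [1, 0, 0]); for a dipole, the backscattered amplitude scales roughly like |sin θ|, where θ is the angle between the look direction R̂_{x,s} and ê_T, so visibility vanishes where the two are parallel:

```python
import numpy as np

# Hypothetical 2D geometry: a dipole-like target along x at the origin,
# straight flight path x = 7 (arbitrary offset), slow time sweeping y.
e_T = np.array([1.0, 0.0])                 # target orientation (horizontal)
y = np.linspace(-10.0, 10.0, 77)           # slow-time antenna positions along the path
antenna = np.stack([np.full_like(y, 7.0), y], axis=1)

# Unit look direction from each antenna position to the target at the origin.
R = -antenna
R_hat = R / np.linalg.norm(R, axis=1, keepdims=True)

# Dipole visibility ~ |sin(theta)| = |R_hat x e_T| (2D cross product).
visibility = np.abs(R_hat[:, 0] * e_T[1] - R_hat[:, 1] * e_T[0])

# The null sits where the path crosses y = 0, i.e. where R_hat is parallel to e_T.
print(abs(y[np.argmin(visibility)]) < 1e-9)   # -> True
```

The single deep null at the y = 0 crossing mirrors the missing data at s = 38 in Figure (4.9); an actual extended target scatters somewhat in other directions too, which is why the assumption is only approximately satisfied.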

Figure 4.11: HH image created using the standard processing vs. the true target function

Figure 4.12: HH image created using the coupled processing vs. the true target function

Figure 4.13: SCR vs. MSE for the standard processed images and coupled processed images respectively, horizontally polarized target

Table 4.1: Initial SCR in dB vs. Final Standard Processed Image SCR in dB, horizontally polarized target

Table 4.2: Initial SCR in dB vs. Final Coupled Processed Image SCR in dB, horizontally polarized target
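The final-image SCR figure of merit used in these tables is an energy ratio in decibels between the reconstruction of target-only data and the reconstruction of clutter-only data. A minimal sketch, where the image arrays and sizes are illustrative rather than taken from the thesis code:

```python
import numpy as np

def image_scr_db(img_target, img_clutter):
    """Final-image SCR in dB: target-image energy over clutter-image energy."""
    e_t = np.sum(np.abs(img_target) ** 2)
    e_c = np.sum(np.abs(img_clutter) ** 2)
    return 10.0 * np.log10(e_t / e_c)

# Toy check: a focused target image with 100x the clutter energy gives 20 dB.
img_target = np.zeros((64, 64)); img_target[32, 32] = 10.0   # energy 100
img_clutter = np.full((64, 64), 1.0 / 64.0)                  # energy 1
print(image_scr_db(img_target, img_clutter))                 # -> 20.0
```

Because the same reconstruction operator is applied to both data sets, this ratio isolates how much each imaging scheme suppresses clutter relative to the target.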

Example Two - Vertically Polarized Target

Figure 4.14: VV component of target vector and target plus clutter vector, vertically polarized target

We next consider the case when the target has the orientation ê_T(x) = [0, 1, 0], which is parallel to the b antenna, or V antenna, and perpendicular to the flight path. We display the true target scene and the target embedded in clutter in Figure (4.14). We expect to see much of the target in this case because of the relationship between the target orientation and the flight path. We again display the data obtained using the a antenna for both transmission and reception, using a for transmission and b for reception, and also the case when b is used for both processes, in Figures (4.15) and (4.16). As before, the first group of data is target-only data, and the second set is target-embedded-in-clutter data. We see that in this case the target is visible for almost all slow-time values, that is, for the length of the flight path. We also note that the addition of clutter does not completely obscure the target data in Figure (4.16). Since there is significant information in all channels, we do not expect our coupled processing to provide much of an advantage over standard processing.

Figure 4.15: HH, HV, and VV target only data, vertically polarized target

In Figure (4.17) we again present the results of the standard image processing, and in Figure (4.18) the results of our coupled processing. These are shown side-by-side with the actual target function. Here we show only the VV image, as the other two images are flat, as expected. Note that in this case we have a signal-to-clutter ratio of 20 dB. As expected, the difference between the two schemes is not as obvious; the scale on both images is the same. We also plot the signal-to-clutter ratio versus the mean-square error in Figure (4.19). Here we see a slight reduction in mean-square error, but again the improvement is not as noticeable. Finally, in Tables (4.3) and (4.4) we see that the final image SCR is again slightly improved with our scheme. This result is expected, as the target is very visible in this scattering scenario.
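The mean-square-error curves in Figures (4.13) and (4.19) compare each reconstructed image against the true target function. A hedged sketch of that comparison (the array names, sizes, and the lack of any normalization are assumptions, not the thesis's exact definition):

```python
import numpy as np

def image_mse(reconstruction, truth):
    """Mean-square error between a reconstructed image and the true target function."""
    return np.mean(np.abs(reconstruction - truth) ** 2)

# Toy check: a reconstruction off by a constant 0.1 everywhere has MSE close to 0.01.
truth = np.zeros((32, 32))
recon = truth + 0.1
print(image_mse(recon, truth))
```

In practice the MSE would be evaluated at several input SCR levels, averaging over clutter realizations, to produce curves like those in Figures (4.13) and (4.19).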

Figure 4.16: HH, HV, and VV target embedded in clutter data, vertically polarized target

Figure 4.17: VV image created using the standard processing vs. the true target function

Figure 4.18: VV image created using the coupled processing vs. the true target function

Figure 4.19: SCR vs. MSE for the standard processed images and coupled processed images respectively, vertically polarized target

Table 4.3: Initial SCR in dB vs. Final Standard Processed Image SCR in dB, vertically polarized target

Table 4.4: Initial SCR in dB vs. Final Coupled Processed Image SCR in dB, vertically polarized target

Example Three - 45° Polarized Target

Figure 4.20: HV component of target vector and target plus clutter vector, 45° polarized target

Our third example considers the case when the target has orientation ê_T = [1/√2, 1/√2, 0]. In this case we expect the coupled processing to aid even more in target reconstruction, as there is more information to be gained by using all three data sets to construct each target vector element. In Figure (4.20) we show the true target scene and the target embedded in clutter. Next we display the target-only and target-embedded-in-clutter data in Figures (4.21) and (4.22). We see that in this case the target is not visible for much of the flight path, and the addition of clutter completely obscures the target. However, there is data in all three channels, so we expect to see some improvement from our coupled processing.

Next we display example images processed using the two different techniques, shown in Figures (4.23) and (4.24). Here we show only the HV image, as the other two images are almost identical. Note that in this case we have a signal-to-clutter ratio of 20 dB. Both algorithms struggle to reconstruct the target; however, our coupled scheme is able to properly display the orientation of the target, while the standard processing fails in this respect. Next we plot the mean-square error versus SCR in Figure (4.25). Here we again see a slight improvement when using our scheme. We do not expect our scheme to improve the MSE much, as the target is not often visible and therefore the amount of data available for reconstruction is minimal in all channels. Lastly we calculate the final image SCR for each type of image and display the results in Tables (4.5) and (4.6). We again see the same result as in the previous two examples, an improvement in final image SCR, though we do not expect the gain to be significant in this case due to the lack of data. We note, however, that if the target has an orientation that is not along the coordinate axes, the additional information used in our reconstruction scheme aids in producing a more accurate target image, as we are able to properly reconstruct the target orientation with our method.

Figure 4.21: HH, HV, and VV target only data, 45° polarized target
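The channel behavior across the three examples can be illustrated with a simplified dipole projection model (a sketch consistent with, but not identical to, the full scattering model of this chapter): the signal in a transmit/receive channel scales like (ê_rx · ê_T)(ê_tx · ê_T), with ê_a = [1, 0, 0] for H and ê_b = [0, 1, 0] for V:

```python
import numpy as np

e_a = np.array([1.0, 0.0, 0.0])   # H antenna polarization
e_b = np.array([0.0, 1.0, 0.0])   # V antenna polarization

def channel_weights(e_T):
    """Relative HH, HV, VV amplitudes for a dipole-like target of orientation e_T."""
    hh = float(np.dot(e_a, e_T) * np.dot(e_a, e_T))
    hv = float(np.dot(e_a, e_T) * np.dot(e_b, e_T))
    vv = float(np.dot(e_b, e_T) * np.dot(e_b, e_T))
    return hh, hv, vv

print(channel_weights(np.array([1.0, 0.0, 0.0])))                 # -> (1.0, 0.0, 0.0)
print(channel_weights(np.array([0.0, 1.0, 0.0])))                 # -> (0.0, 0.0, 1.0)
print(channel_weights(np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)))  # equal in all three
```

This reproduces the observed pattern: a horizontal target excites only HH, a vertical target only VV, and the 45° target splits its energy equally among HH, HV, and VV, which is why coupling all three data channels helps most for off-axis orientations.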


More information

Wave equation techniques for attenuating multiple reflections

Wave equation techniques for attenuating multiple reflections Wave equation techniques for attenuating multiple reflections Fons ten Kroode a.tenkroode@shell.com Shell Research, Rijswijk, The Netherlands Wave equation techniques for attenuating multiple reflections

More information

EE/Ge 157 b. Week 2. Polarimetric Synthetic Aperture Radar (2)

EE/Ge 157 b. Week 2. Polarimetric Synthetic Aperture Radar (2) EE/Ge 157 b Week 2 Polarimetric Synthetic Aperture Radar (2) COORDINATE SYSTEMS All matrices and vectors shown in this package are measured using the backscatter alignment coordinate system. This system

More information

Lecture 4: Exponential family of distributions and generalized linear model (GLM) (Draft: version 0.9.2)

Lecture 4: Exponential family of distributions and generalized linear model (GLM) (Draft: version 0.9.2) Lectures on Machine Learning (Fall 2017) Hyeong In Choi Seoul National University Lecture 4: Exponential family of distributions and generalized linear model (GLM) (Draft: version 0.9.2) Topics to be covered:

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department Physics 8.286: The Early Universe October 27, 2013 Prof. Alan Guth PROBLEM SET 6

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department Physics 8.286: The Early Universe October 27, 2013 Prof. Alan Guth PROBLEM SET 6 MASSACHUSETTS INSTITUTE OF TECHNOLOGY Physics Department Physics 8.86: The Early Universe October 7, 013 Prof. Alan Guth PROBLEM SET 6 DUE DATE: Monday, November 4, 013 READING ASSIGNMENT: Steven Weinberg,

More information

Mathematical Tripos, Part IB : Electromagnetism

Mathematical Tripos, Part IB : Electromagnetism Mathematical Tripos, Part IB : Electromagnetism Proof of the result G = m B Refer to Sec. 3.7, Force and couples, and supply the proof that the couple exerted by a uniform magnetic field B on a plane current

More information

1.1 Basis of Statistical Decision Theory

1.1 Basis of Statistical Decision Theory ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016 Lecture 1: Introduction Lecturer: Yihong Wu Scribe: AmirEmad Ghassami, Jan 21, 2016 [Ed. Jan 31] Outline: Introduction of

More information

Digital Transmission Methods S

Digital Transmission Methods S Digital ransmission ethods S-7.5 Second Exercise Session Hypothesis esting Decision aking Gram-Schmidt method Detection.K.K. Communication Laboratory 5//6 Konstantinos.koufos@tkk.fi Exercise We assume

More information

APPENDIX 2.1 LINE AND SURFACE INTEGRALS

APPENDIX 2.1 LINE AND SURFACE INTEGRALS 2 APPENDIX 2. LINE AND URFACE INTEGRAL Consider a path connecting points (a) and (b) as shown in Fig. A.2.. Assume that a vector field A(r) exists in the space in which the path is situated. Then the line

More information

Kirchhoff, Fresnel, Fraunhofer, Born approximation and more

Kirchhoff, Fresnel, Fraunhofer, Born approximation and more Kirchhoff, Fresnel, Fraunhofer, Born approximation and more Oberseminar, May 2008 Maxwell equations Or: X-ray wave fields X-rays are electromagnetic waves with wave length from 10 nm to 1 pm, i.e., 10

More information

Wide-band pulse-echo imaging with distributed apertures in multi-path environments

Wide-band pulse-echo imaging with distributed apertures in multi-path environments IOP PUBLISHING (28pp) INVERSE PROBLEMS doi:10.1088/0266-5611/24/4/045013 Wide-band pulse-echo imaging with distributed apertures in multi-path environments T Varslot 1, B Yazıcı 1 and M Cheney 2 1 Department

More information

Ch. 5 Hypothesis Testing

Ch. 5 Hypothesis Testing Ch. 5 Hypothesis Testing The current framework of hypothesis testing is largely due to the work of Neyman and Pearson in the late 1920s, early 30s, complementing Fisher s work on estimation. As in estimation,

More information

Sparse Linear Models (10/7/13)

Sparse Linear Models (10/7/13) STA56: Probabilistic machine learning Sparse Linear Models (0/7/) Lecturer: Barbara Engelhardt Scribes: Jiaji Huang, Xin Jiang, Albert Oh Sparsity Sparsity has been a hot topic in statistics and machine

More information

Revision of Lecture 4

Revision of Lecture 4 Revision of Lecture 4 We have completed studying digital sources from information theory viewpoint We have learnt all fundamental principles for source coding, provided by information theory Practical

More information

So far, we have considered three basic classes of antennas electrically small, resonant

So far, we have considered three basic classes of antennas electrically small, resonant Unit 5 Aperture Antennas So far, we have considered three basic classes of antennas electrically small, resonant (narrowband) and broadband (the travelling wave antenna). There are amny other types of

More information

Chapter 7. Hypothesis Testing

Chapter 7. Hypothesis Testing Chapter 7. Hypothesis Testing Joonpyo Kim June 24, 2017 Joonpyo Kim Ch7 June 24, 2017 1 / 63 Basic Concepts of Testing Suppose that our interest centers on a random variable X which has density function

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

Bayesian Linear Regression [DRAFT - In Progress]

Bayesian Linear Regression [DRAFT - In Progress] Bayesian Linear Regression [DRAFT - In Progress] David S. Rosenberg Abstract Here we develop some basics of Bayesian linear regression. Most of the calculations for this document come from the basic theory

More information

Multivariable Calculus Notes. Faraad Armwood. Fall: Chapter 1: Vectors, Dot Product, Cross Product, Planes, Cylindrical & Spherical Coordinates

Multivariable Calculus Notes. Faraad Armwood. Fall: Chapter 1: Vectors, Dot Product, Cross Product, Planes, Cylindrical & Spherical Coordinates Multivariable Calculus Notes Faraad Armwood Fall: 2017 Chapter 1: Vectors, Dot Product, Cross Product, Planes, Cylindrical & Spherical Coordinates Chapter 2: Vector-Valued Functions, Tangent Vectors, Arc

More information

Composite Hypotheses and Generalized Likelihood Ratio Tests

Composite Hypotheses and Generalized Likelihood Ratio Tests Composite Hypotheses and Generalized Likelihood Ratio Tests Rebecca Willett, 06 In many real world problems, it is difficult to precisely specify probability distributions. Our models for data may involve

More information

Derivation of the General Propagation Equation

Derivation of the General Propagation Equation Derivation of the General Propagation Equation Phys 477/577: Ultrafast and Nonlinear Optics, F. Ö. Ilday, Bilkent University February 25, 26 1 1 Derivation of the Wave Equation from Maxwell s Equations

More information

On rational approximation of algebraic functions. Julius Borcea. Rikard Bøgvad & Boris Shapiro

On rational approximation of algebraic functions. Julius Borcea. Rikard Bøgvad & Boris Shapiro On rational approximation of algebraic functions http://arxiv.org/abs/math.ca/0409353 Julius Borcea joint work with Rikard Bøgvad & Boris Shapiro 1. Padé approximation: short overview 2. A scheme of rational

More information

Stochastic Histories. Chapter Introduction

Stochastic Histories. Chapter Introduction Chapter 8 Stochastic Histories 8.1 Introduction Despite the fact that classical mechanics employs deterministic dynamical laws, random dynamical processes often arise in classical physics, as well as in

More information

Antennas and Propagation. Chapter 2: Basic Electromagnetic Analysis

Antennas and Propagation. Chapter 2: Basic Electromagnetic Analysis Antennas and Propagation : Basic Electromagnetic Analysis Outline Vector Potentials, Wave Equation Far-field Radiation Duality/Reciprocity Transmission Lines Antennas and Propagation Slide 2 Antenna Theory

More information

Lecture 8: Information Theory and Statistics

Lecture 8: Information Theory and Statistics Lecture 8: Information Theory and Statistics Part II: Hypothesis Testing and Estimation I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 22, 2015

More information