Sensitivity Considerations in Compressed Sensing

Louis L. Scharf,1 Edwin K. P. Chong,1,2 Ali Pezeshki,2 and J. Rockey Luo2
1 Department of Mathematics, Colorado State University, Fort Collins, CO 80523, USA
2 Department of Electrical and Computer Engineering, Colorado State University, Fort Collins, CO 80523, USA

Abstract: In [1]-[4], we considered the question of basis mismatch in compressed sensing. Our motivation was to study the effect of mismatch between the mathematical basis (or frame) in which a signal was assumed to be sparse and the physical basis in which the signal was actually sparse. We were motivated by the problem of inverting a complex space-time radar image for the field of complex scatterers that produced the image. In this case there is no a priori known basis in which the image is actually sparse, as radar scatterers do not usually agree to place their ranges and Dopplers on any a priori agreed sampling grid. The consequence is that sparsity in the physical basis is not maintained in the mathematical basis, and a sparse inversion in the mathematical basis or frame does not match up with an inversion for the field in the physical basis. In [1]-[3], this effect was quantified with theorem statements about sensitivity to basis mismatch and with numerical examples for inverting time-series records for their sparse set of damped complex exponential modes. These inversions compared unfavorably with inversions using fancy linear prediction. In [4] and this paper, we continue these investigations by comparing the performance of sparse inversions of sparse images, using a priori selected frames that are mismatched to the physical basis, and by computing the Fisher information matrix for compressions of images that are sparse in a physical basis.

I. INTRODUCTION

There are many ways that a sparse inversion can arise in signal processing. For example:

1) An over-determined separable linear model is replaced by an under-determined linear model. In the separable linear model, parameters such as complex scattering coefficients compose a small number of modes that are nonlinearly modulated by mode parameters such as frequency, wavenumber, and delay. In the approximating linear model, scattering coefficients compose a large number of modes whose frequencies, wavenumbers, and delays are chosen a priori. In the over-determined separable linear model, the problem is to estimate the mode parameters and the scattering coefficients. In the under-determined linear model, the mode parameters are assumed to be determined already at an agreed grid spacing, so the only problem is to estimate the complex scattering coefficients for the few modes that are active and to call the parameter estimates the parameters belonging to these active modes. The question that arises is, what is the sensitivity to mismatch between the unknown a priori basis in the over-determined problem and the assumed a priori frame in the under-determined problem? How does the performance of the sparse inversion compare with the Cramer-Rao bound (CRB)?

2) A recorded radar image can be compressed with random filters and beamformers, to be subsequently inverted for the ranges, Dopplers, and DOAs of the scattering field. The question that arises is, how well does the compressed version of the space-time image preserve information about the field of scatterers? How is Fisher information preserved, or equivalently, how do CRBs for the compressed image compare with CRBs for the original field?

In this paper, we address these questions with results from our continuing investigations. Our results suggest that a compressed set of recordings, processed according to the principles of sparse inversion, will suffer performance losses with respect to classical methods. Our continuing aim is to manage these losses by exploiting a priori knowledge to tailor the compression to the problem.
(This paper was presented at the 2011 Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 6-9, 2011. The work reported here is supported by DARPA under contract N661-11-C-423 and by NSF under grant CCF-118472.)

II. FROM AN OVER-DETERMINED LINEAR SEPARABLE MODEL TO AN UNDER-DETERMINED LINEAR MODEL

Begin with the separable linear model

$y = x + n$, $x = A(\phi)\theta$,

where $x, n, y \in \mathbb{C}^n$. The parameters $\theta \in \mathbb{C}^q$ linearly compose the signal $x$, and the parameters $\phi = [\phi_1, \ldots, \phi_p]^T \in \mathbb{C}^p$, with $\phi_i \in \Phi$ for $i = 1, \ldots, p$, nonlinearly modulate the modes of the model matrix $A(\phi)$. An interesting special case is when $p = q$ and $A(\phi) \in \mathbb{C}^{n \times q}$ is structured as $A(\phi) = [a(\phi_1), a(\phi_2), \ldots, a(\phi_p)]$, where $a(\phi_i) = [1, z(\phi_i), z^2(\phi_i), \ldots, z^{n-1}(\phi_i)]^T$, with $z(\phi_i) \in \mathbb{C}$. This models DOA modulation in a linear array or linear angle modulation in a time series. There is a great number of methods of modal analysis in the published literature for estimating the parameters $(\theta, \phi)$, and their performance is well understood. There are general formulas for the Fisher information matrix and the CRB on the error covariance matrix (see, e.g., [5]), and in specific cases where the matrix $A(\phi)$ is composed of damped complex exponential modes, there are special formulas that have been numerically evaluated for concentration ellipses [6].
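To make the separable linear model concrete, the following is a minimal sketch (our own illustration, not code from the paper) that builds the mode matrix $A(\phi)$ for undamped complex exponential modes $z(\phi_i) = e^{j\phi_i}$ and draws a noisy measurement $y = A(\phi)\theta + n$. The variable names, seed, and SNR are illustrative assumptions.

```python
# Minimal sketch of the separable linear model y = A(phi) theta + n, assuming
# undamped complex exponential modes z(phi_i) = exp(j*phi_i).
import numpy as np

def mode_matrix(phis, n):
    """Columns a(phi_i) = [1, z, z^2, ..., z^(n-1)]^T with z = exp(j*phi_i)."""
    k = np.arange(n)[:, None]                      # sample index 0..n-1
    return np.exp(1j * k * np.asarray(phis)[None, :])

rng = np.random.default_rng(0)
n = 25                                             # number of samples (as in the Sec. II example)
phis = 2 * np.pi * np.array([0.50, 0.52])          # mode angles phi_i = 2*pi*f_i (illustrative)
theta = np.array([1.0 + 0j, 1.0 + 0j])             # complex scattering coefficients
sigma = 10 ** (-10 / 20)                           # noise std for SNR = 10 log10(1/sigma^2) = 10 dB

A = mode_matrix(phis, n)                           # A(phi) = [a(phi_1), a(phi_2)]
noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
y = A @ theta + noise                              # measurement y = A(phi) theta + n
```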

Generally, the Fisher information matrix in the model $y : CN_n[A(\phi)\theta, \sigma^2 I]$ may be written as [7]

$J_y = \frac{1}{\sigma^2} G^H G$,

where $G$ is a matrix of sensitivities: $G = [G_\phi, G_\theta]$, $G_\phi = [g_\phi[1], g_\phi[2], \ldots, g_\phi[p]]$, $G_\theta = [g_\theta[1], g_\theta[2], \ldots, g_\theta[q]]$. The sensitivity vectors $g$ are partial derivatives of $A(\phi)\theta$ with respect to the elements of $\phi$ and $\theta$. Thus the Fisher information matrix $J = \frac{1}{\sigma^2} G^H G$ is a Grammian of sensitivities. There is a nice geometrical interpretation [7], showing that Fisher information is poor when the one-dimensional subspace spanned by $g_\phi[i]$ lies near the multidimensional subspace spanned by all other sensitivity vectors. This explains the difficulty in resolving closely spaced modes.

Suppose the over-determined linear separable model is replaced by the under-determined linear model

$y = x + n$, $x = Bc$,

where the nonlinearly modulated mode matrix $A(\phi) = [a_1, a_2, \ldots, a_p]$, with $a_i = a(\phi_i)$ for $i = 1, 2, \ldots, p$, has been replaced by the fixed frame $B = [b_1, b_2, \ldots, b_N]$, $N > n > p$, with the $b_i$ a gridding of the set $\{a(\phi), \phi \in \Phi\}$. Whether or not the measurement $y$ is compressed, there is the question of inverting each of these models for their mode parameters $\phi$. Generally, a sparse inversion inverts for a sparse $c$ in the model $x = Bc$ under a constraint on the fit of $Bc$ to $y$. For example, one may solve the following $\ell_1$ minimization problem:

$\min \|c\|_1$ subject to $\|y - Bc\|_2^2 \le \epsilon^2$.

Once a sparse $c$ is returned, the estimate for $\phi$ is extracted from the support set of the identified $c$. As the gridding of the mode space never produces an exact match between the modes $a_i$ and the gridded modes $b_i$, there is a question of sensitivity to mismatch. These questions have been answered analytically and experimentally in [1]-[3], but no Monte Carlo simulation of estimator errors has been undertaken to date. In [4] and this paper, we present a small subset of our simulations of estimators and estimator errors, using sparse inversions based on a second-order cone program (SOCP) for $\ell_1$ minimization and sparse inversions based on orthogonal matching pursuit (OMP).
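As an illustration of the two inversions just described, here is a minimal sketch (our own construction, not the authors' code) that grids a frame $B$ with expansion factor $L$, places the true tones half a grid cell off the frame frequencies, and runs OMP; an $\ell_1$/SOCP variant is included as a function that assumes the cvxpy package is available. The helper names, seed, and SNR are illustrative assumptions.

```python
# Sparse inversion on a gridded frame B that is mismatched (half-cell offset)
# to the physical modes: OMP, plus an l1/SOCP variant via cvxpy (if installed).
import numpy as np

def frame(n, L):
    """B = [b_1, ..., b_N], N = n*L complex exponentials gridded at 1/(n*L) Hz."""
    freqs = np.arange(n * L) / (n * L)
    return np.exp(2j * np.pi * np.arange(n)[:, None] * freqs[None, :]), freqs

def omp(B, y, k_max, tol):
    """Greedy OMP: pick the column most correlated with the residual, refit, repeat."""
    support, resid, c_s = [], y.copy(), np.zeros(0, complex)
    for _ in range(k_max):
        if np.linalg.norm(resid) <= tol:
            break
        support.append(int(np.argmax(np.abs(B.conj().T @ resid))))
        c_s, *_ = np.linalg.lstsq(B[:, support], y, rcond=None)
        resid = y - B[:, support] @ c_s
    c = np.zeros(B.shape[1], complex)
    c[support] = c_s
    return c, support

def bpdn(B, y, eps):
    """l1 (SOCP) inversion: min ||c||_1 s.t. ||y - B c||_2 <= eps (assumes cvxpy)."""
    import cvxpy as cp
    c = cp.Variable(B.shape[1], complex=True)
    cp.Problem(cp.Minimize(cp.norm(c, 1)), [cp.norm(y - B @ c, 2) <= eps]).solve()
    return c.value

rng = np.random.default_rng(1)
n, L, sigma = 25, 4, 10 ** (-10 / 20)               # 25 samples, expansion factor 4, 10 dB SNR
B, grid = frame(n, L)
f_true = np.array([0.50, 0.52]) + 1 / (2 * n * L)   # half-cell mismatch to the frame grid
A = np.exp(2j * np.pi * np.arange(n)[:, None] * f_true[None, :])
y = A.sum(axis=1) + sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

c_hat, supp = omp(B, y, k_max=2, tol=np.sqrt(n) * sigma)   # fit matched to the noise level
print("frequency estimates from the OMP support set:", np.sort(grid[supp]))
```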
In Figs. 1(a)-(c), we present experimental mean-squared estimation errors for estimating the frequency of an $f_1 = 0.5$ Hz, unit-amplitude complex exponential from a 25-sample measurement of this complex exponential, plus an interfering $f_2 = 0.52$ Hz, unit-amplitude complex exponential, plus proper complex Gaussian noise of variance $\sigma^2$. The 0.02 Hz separation between the two tones is half the Rayleigh limit of $1/25$ Hz. For reference, the mean-squared error (MSE) of errors uniformly distributed over this Rayleigh limit is $-34$ dB. SNR (dB) is defined to be $10 \log_{10}(1/\sigma^2)$, and mean-squared error is plotted as $10 \log_{10}(\mathrm{MSE})$. The linear dotted line is the CRB. The parameter $L$ denotes the ratio $N/n$, so it may be called an expansion factor. At expansion factor $L$, the number of modes in the frame $B$ is $nL$, corresponding to complex exponentials spaced at $1/(nL)$ Hz; the resolution of the frame may be said to be $1/(nL)$ Hz. These modes are placed so that there is always a half-cell mismatch between the actual frequencies and the mode frequencies. The width of the half-cell, $1/(2nL)$, is indicated in Figs. 1(a)-(c) for the various values of $L$ as asterisks on the right side of the plot.

Fig. 1(a) demonstrates that at a low expansion factor of $L = 2$ the $\ell_1$ inversions are noise-defeated below 5 dB and resolution-limited above 5 dB, meaning they are limited by the resolution of the frame. That is, below 5 dB, mean-squared error is bias-squared plus variance, while above 5 dB, mean-squared error is bias-squared due to the coarse-grained resolution of the frame. At $L = 4$, the inversions are noise-defeated below 5 dB, noise-limited from 5 to 10 dB, and resolution-limited above 10 dB. As the expansion factor is increased, corresponding to finer- and finer-grained resolution in frequency, the frame loses its incoherence, meaning the dimension of the null space increases so much that there are many sparse inversions that meet a fitting constraint. As a consequence, we anticipate that for larger values of $L$ the mean-squared error never reaches its resolution limit, as variance overtakes bias-squared to produce mean-squared errors that are larger than those of inversions that used smaller expansion factors. This suggests that there is a clear limit to how much bias-squared can be reduced with frame expansion before variance overtakes bias-squared to produce degraded $\ell_1$ inversions.

Figs. 1(b) and (c) make these points for OMP inversions. The interesting thing about these results is that OMP inversions extend the threshold behavior of the inversions, they track the CRB more closely in the noise-limited region, and they reach their resolution limit for larger values of $L$ before reaching their null-space limit at high SNRs. For example, at smaller expansion factors the null-space limit has not yet been reached at SNR = 30 dB, whereas for $L = 14$ the null-space limit is reached before the resolution limit can be reached. In all of these experiments, the fitting error is matched to the noise variance; with mismatch between fitting error and noise variance the results are more pessimistic. The results suggest that the replacement of an over-determined separable linear model with an under-determined linear model, which is then sparsely inverted, must be very carefully managed to find an appropriate expansion factor and fitting parameter, if performance loss with respect to classical methods is to be managed.

Remark. The results reported in Fig. 1 are actually too optimistic, as the mode amplitudes are equal. For a weak mode in the presence of a strong interfering mode, the results are much worse.
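For reference curves like the dotted CRB line in Fig. 1, the deterministic Fisher information can be evaluated as a Grammian of sensitivities and inverted. Below is a minimal sketch (our own, not the paper's code) for the two-tone problem, using the standard real-parameter form of the Grammian for a proper complex Gaussian model; the finite-difference step, parameterization, and operating point are our assumptions.

```python
# Deterministic CRB for the two-tone frequency estimation problem via the
# Grammian of sensitivities. For y ~ CN(x(alpha), sigma^2 I) with real
# parameters alpha, J = (2/sigma^2) Re{D^H D}, D = d x / d alpha.
import numpy as np

def tone_mean(f, amps, n):
    """x = A(f) theta for complex exponential modes at frequencies f (cycles/sample)."""
    k = np.arange(n)[:, None]
    return np.exp(2j * np.pi * k * np.asarray(f)[None, :]) @ amps

def fim_two_tones(f, amps, sigma, n, df=1e-6):
    """Real-parameter Fisher information for (f_1, f_2, Re/Im of each amplitude)."""
    cols = []
    for i in range(len(f)):                        # frequency sensitivities (central differences)
        fp, fm = f.copy(), f.copy()
        fp[i] += df
        fm[i] -= df
        cols.append((tone_mean(fp, amps, n) - tone_mean(fm, amps, n)) / (2 * df))
    A = np.exp(2j * np.pi * np.arange(n)[:, None] * np.asarray(f)[None, :])
    for i in range(len(amps)):                     # amplitude sensitivities: d x/d Re, d x/d Im
        cols.append(A[:, i])
        cols.append(1j * A[:, i])
    D = np.stack(cols, axis=1)
    return (2.0 / sigma**2) * np.real(D.conj().T @ D)

n = 25
f = np.array([0.50, 0.52])                         # the two tones of Fig. 1
amps = np.array([1.0 + 0j, 1.0 + 0j])
sigma = 10 ** (-10 / 20)                           # SNR = 10 dB (illustrative operating point)
J = fim_two_tones(f, amps, sigma, n)
crb_f1 = np.linalg.inv(J)[0, 0]                    # CRB on the first frequency, in Hz^2
print("10*log10 CRB(f1) =", 10 * np.log10(crb_f1), "dB")
```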

Fig. 1. Experimental mean-squared estimation errors for estimating the frequency of an $f_1 = 0.5$ Hz, unit-amplitude complex exponential from a 25-sample measurement of this complex exponential, plus an interfering $f_2 = 0.52$ Hz, unit-amplitude complex exponential, plus proper complex Gaussian noise of variance $\sigma^2$. Experimental mean-squared errors are shown for $\ell_1$ (SOCP) and OMP inversions for different expansion factors $L$: (a) $\ell_1$ inversions for $L = 2, 4, 6, 8$; (b) OMP inversions for $L = 2, 4, 6, 8$; (c) OMP inversions for $L = 10, 12, 14$. The asterisks indicate the width of the half-cell, $1/(2nL)$, for the various values of $L$.

III. FROM MEASUREMENT SPACE TO RANDOM BEAM SPACE

Beam space processing replaces a set of $n$ spatial measurements $y \in \mathbb{C}^n$ with a set of $m \le n$ beam space measurements $z = Ty$, $T \in \mathbb{C}^{m \times n}$, where the rows of the beamforming matrix $T$ are beam-steering vectors. Conventionally these rows are designed to form spatial beams, and typically the beam centers are spaced at $m$ equally spaced resolutions of electrical angle in $(-\pi, \pi]$. So the question is, how does the Fisher information matrix for the DOA parameters of a sparse scattering or radiating field, computed from the original measurements $y$, compare with the Fisher information matrix for these same parameters, computed from the beam space measurements $z = Ty$? And how do these results compare for a conventional choice of the beamforming matrix and for a random choice? We offer answers to both of these questions.

For any transformation $T$, the effect on the Fisher information matrix in the model $y : CN_n[A(\phi)\theta, \sigma^2 I]$ is [7]

$J_{Ty} = \frac{1}{\sigma^2} G^H P_{T^H} G$,

where $P_{T^H} = T^H (T T^H)^{-1} T$ is the projection onto the range space of $T^H$, and $G$ is the matrix of sensitivities. It is the effect of the transformation $T$ on the sensitivity vectors, $TG$, and on the noise covariance, $TT^H$, that determines the impact of the transformation on Fisher information.

Remark. If the parameters $\phi$ modulate a covariance matrix $R(\phi)$ in the second-order model $y : CN_N[0, R(\phi)]$, then the effect of the transformation $T$ is to produce the Fisher information

$(J_{Ty})_{ij} = \mathrm{tr}\{(T R T^H)^{-1} T \frac{\partial R}{\partial \phi_i} T^H (T R T^H)^{-1} T \frac{\partial R}{\partial \phi_j} T^H\}$.

Figs. 2(a)-(d) demonstrate the results of random beamsteering with a transformation $T$. The experiment is the following: two plane waves propagate across the face of an $n = 64$ element linear array, or two linearly modulated complex exponentials populate a time series. One of the plane waves arrives on boresight and the other arrives at electrical angle $\phi$, with $\phi$ swept over $(-3 \cdot 2\pi/n, 3 \cdot 2\pi/n]$ with a step size much finer than the Rayleigh limit of $2\pi/n$ radians. For each such angle, the CRB for estimating the boresight angle is computed and plotted, with and without beamsteering with the compressive beamformer $T$. In our experiments, $T$ is chosen to be a random complex Gaussian matrix, a Slepian matrix of approximate beamwidth $2\pi/m$ (see, e.g., [8] and [9]), or a monopulse beamsteering matrix consisting of a sum beam and a difference beam (see, e.g., [10]). The SNR is normalized to 0 dB, as it is only the relative values of the CRB that interest us. Moving from Fig. 2(a) to Fig. 2(d), the compression factor $n/m$ is increased from 1 to 8 for a random complex Gaussian compressor. In Fig. 2(a), the compression factor is $n/m = 1$; the random beamformers build a nonsingular matrix with probability one, so the CRB is the CRB of the original measurements, without beamforming.
In Fig. 2(d), the compression factor is $n/m = 8$.
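The following minimal sketch (our own construction, not the authors' code) evaluates the compressed-measurement Fisher information $J_{Ty} = \frac{1}{\sigma^2} G^H P_{T^H} G$, in its real-parameter form, for one random complex Gaussian compressor and compares the resulting CRB on the boresight angle with the uncompressed CRB. The interferer placement, derivative step, and seed are illustrative assumptions.

```python
# CRB on the boresight electrical angle from compressed measurements z = T y,
# using J_Ty = (2/sigma^2) Re{D^H P_{T^H} D} with P_{T^H} = T^H (T T^H)^{-1} T.
import numpy as np

def steering(psi, n):
    """a(psi) = [1, e^{j psi}, ..., e^{j (n-1) psi}]^T."""
    return np.exp(1j * np.arange(n) * psi)

def jacobian(psis, amps, n, d=1e-6):
    """Columns: d x / d psi_i (central differences), then d x / d Re(theta_i), d x / d Im(theta_i)."""
    def mean(ps):
        return sum(a * steering(p, n) for p, a in zip(ps, amps))
    cols = []
    for i in range(len(psis)):
        pp, pm = list(psis), list(psis)
        pp[i] += d
        pm[i] -= d
        cols.append((mean(pp) - mean(pm)) / (2 * d))
    for p in psis:
        cols.append(steering(p, n))
        cols.append(1j * steering(p, n))
    return np.stack(cols, axis=1)

def crb_boresight(T, psis, amps, sigma, n):
    """Invert the (real-parameter) Fisher information and return the CRB on the boresight angle."""
    D = jacobian(psis, amps, n)
    P = np.eye(n) if T is None else T.conj().T @ np.linalg.solve(T @ T.conj().T, T)
    J = (2.0 / sigma**2) * np.real(D.conj().T @ P @ D)
    return np.linalg.inv(J)[0, 0]

rng = np.random.default_rng(2)
n, m, sigma = 64, 8, 1.0                            # 8x compression; SNR normalized to 0 dB
psis = [0.0, 1.5 * 2 * np.pi / n]                   # boresight source and a nearby interferer
amps = [1.0 + 0j, 1.0 + 0j]
T = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
# A Slepian or monopulse compressor (see the text) could be substituted for T here.

loss_dB = 10 * np.log10(crb_boresight(T, psis, amps, sigma, n)
                        / crb_boresight(None, psis, amps, sigma, n))
print("CRB loss from 8x random compression:", round(loss_dB, 2), "dB")
```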

Fig. 2. Cramer-Rao lower bound for estimating the electrical angle of a source at boresight, as the electrical angle $\phi$ of an interfering source varies from $-3 \cdot 2\pi/n$ to $3 \cdot 2\pi/n$. Each red curve in a subfigure corresponds to one realization of a random complex Gaussian beamforming matrix $T$ of size $m \times n$; multiple realizations are superimposed in each subfigure. (a) No compression ($m = n = 64$), (b) compression by 2 ($m = n/2 = 32$), (c) compression by 4 ($m = n/4 = 16$), and (d) compression by 8 ($m = n/8 = 8$).

Fig. 3. Cramer-Rao lower bound for estimating the electrical angle of a source at boresight, as the electrical angle $\phi$ of an interfering source varies from $-3 \cdot 2\pi/n$ to $3 \cdot 2\pi/n$, from compressive measurements made by a Slepian beamformer of approximate beamwidth $2\pi/m$. (a) No compression ($m = n = 64$), (b) compression by 2 ($m = n/2 = 32$), (c) compression by 4 ($m = n/4 = 16$), and (d) compression by 8 ($m = n/8 = 8$).

The band of CRBs in Fig. 2(d) shows the bounds obtained for many random choices of $T$. The variation between good and bad random choices of $T$ is 6 dB, and the best choice has a loss of 9 dB with respect to the CRB for the original measurements. This represents roughly an eight-fold increase in the variance (radians-squared) of the boresight-angle estimate, or nearly a three-fold increase in its standard deviation (radians). These results quantify the performance loss of compressed random beamforming, providing a basis for selecting a compression factor that meets performance specifications.

Figs. 3(a)-(d) illustrate the same results for Slepian beamforming, and Figs. 4(a) and (b) illustrate the results for monopulse beamsteering. Within the main beam defined by the Rayleigh limit, the increase in the CRB for the Slepian beamformer is negligible, while the monopulse beamformer outperforms the uncompressed array processing by filtering out broad wavenumber noise. This shows that if the vicinity of closely spaced sources is known, this knowledge can be exploited to tailor the compression to improve resolution.

Fig. 4. Cramer-Rao lower bound for estimating the electrical angle of a source at boresight, as the electrical angle $\phi$ of an interfering source varies from $-3 \cdot 2\pi/n$ to $3 \cdot 2\pi/n$, from compressive measurements made by a monopulse beamformer. (a) Comparison near boresight. (b) Comparison in a wider region.

In all of these experiments, it is the so-called deterministic Fisher information $J_{Ty} = \frac{1}{\sigma^2} G^H P_{T^H} G$ that is computed and inverted for the CRB on the variance of an estimator of the boresight angle.

IV. CONCLUSIONS

We have reviewed two problems in signal processing that may be approximated and re-framed in such a way that a sparse inversion of compressed measurements for modal parameters can be considered. But in each case there are consequences arising from model mismatch and from loss of Fisher information. Our results illuminate these consequences and suggest remedies based on a priori knowledge.

REFERENCES

[1] Y. Chi, L. L. Scharf, A. Pezeshki, and R. Calderbank, "Sensitivity to basis mismatch of compressed sensing for spectrum analysis and beamforming," in Proc. 6th U.S./Australia Joint Workshop on Defence Applications of Signal Processing (DASP), Lihue, HI, Sep. 26-28, 2009.
[2] Y. Chi, A. Pezeshki, L. L. Scharf, and R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," in Proc. 2010 IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Dallas, TX, Mar. 14-19, 2010, pp. 3930-3933.
[3] Y. Chi, L. L. Scharf, A. Pezeshki, and R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," IEEE Trans. Signal Process., vol. 59, no. 5, pp. 2182-2195, May 2011.
[4] L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. Luo, "Compressive sensing and sparse inversion in signal processing: Cautionary notes," in Proc. 7th U.S./Australia Joint Workshop on Defence Applications of Signal Processing (DASP), Coolum, Queensland, Australia, Jul. 9-14, 2011.
[5] H. L. Van Trees and K. L. Bell, Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking. IEEE Press, 2007.
[6] L. T. McWhorter and L. L. Scharf, "Cramer-Rao bounds for deterministic modal analysis," IEEE Trans. Signal Process., vol. 41, no. 5, pp. 1847-1862, May 1993.
[7] L. L. Scharf and L. T. McWhorter, "Geometry of the Cramer-Rao bound," Signal Processing, vol. 31, no. 3, pp. 301-311, Apr. 1993.
[8] D. Slepian, "Prolate spheroidal wave functions, Fourier analysis and uncertainty - V: The discrete case," Bell Syst. Tech. J., pp. 1371-1430, 1978.
[9] C. T. Mullis and L. L. Scharf, "Quadratic estimators of the power spectrum," in Advances in Spectrum Estimation, S. Haykin, Ed. Prentice Hall, 1990, vol. 1, ch. 1, pp. 1-57.
[10] E. Mosca, "Angle estimation in amplitude comparison monopulse systems," IEEE Trans. Aerosp. Electron. Syst., vol. AES-5, no. 2, pp. 205-212, Mar. 1969.