Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection and System Identification


Borhan M. Sanandaji, Tyrone L. Vincent, and Michael B. Wakin

Abstract: In this paper, we derive concentration of measure inequalities for compressive Toeplitz matrices (having fewer rows than columns) with entries drawn from an independent and identically distributed (i.i.d.) Gaussian random sequence. These inequalities show that the norm of a vector mapped by a Toeplitz matrix to a lower dimensional space concentrates around its mean, with a tail probability bound that decays exponentially in the dimension of the range space divided by a factor that is a function of the sample covariance of the vector. Motivated by the emerging field of Compressive Sensing (CS), we apply these inequalities to problems involving the analysis of high-dimensional systems from convolution-based compressive measurements. We discuss applications such as system identification, namely the estimation of the impulse response of a system, in cases where one can assume that the impulse response is high-dimensional but sparse. We also consider the problem of detecting a change in the dynamic behavior of a system, where the change itself can be modeled by a system with a sparse impulse response.

(All authors are with the Division of Engineering, Colorado School of Mines, Golden, CO 80401, USA; {bmolazem, tvincent, mwakin}@mines.edu. This work was partially supported by AFOSR Grant FA9550-09-1-0465, NSF Grant CCF-0830320, DARPA Grant HR0011-08-1-0078, NSF Grant CNS-0931748, and Department of Energy, Office of Energy Efficiency and Renewable Energy Grant DE-FG36-08GO8800.)

I. INTRODUCTION

We live in an era that one might call the century of data explosion; indeed, we have now reached a point where all of the data generated by intelligent sensors, digital cameras, and so on exceeds the world's total data storage capacity [1]. Motivated to reduce the burdens of acquiring, transmitting, storing, and analyzing such vast quantities of data, signal processing researchers have over the last few decades developed a variety of techniques for data compression and dimensionality reduction. Unfortunately, many of these techniques require a raw, high-dimensional data set to be acquired before its essential low-dimensional structure can be identified, extracted, and exploited. In contrast, what would be truly desirable are sensors that require fewer raw measurements yet still capture the essential information in a data set. One positive outcome of this past work, however, has been the development of sparse and compressible representations as concise models for high-dimensional data sets.

Building on these principles, Compressive Sensing (CS) has very recently emerged as a powerful paradigm for combining measurement with compression. First introduced by Candès, Romberg and Tao [2]-[4], and Donoho [5], the CS problem can be viewed as the recovery of an $s$-sparse signal $a \in \mathbb{R}^n$ from just $d < n$ (or even $d \ll n$) observations $y = Xa \in \mathbb{R}^d$, where $X \in \mathbb{R}^{d \times n}$ is a matrix representing the linear measurement process. An $s$-sparse signal $a \in \mathbb{R}^n$ is a signal of length $n$ with $s < n$ nonzero (significant) entries; the notation $s := \|a\|_0$ denotes the sparsity level of $a$. Since the null space of $X$ is non-trivial, exact recovery of the signal depends on whether the solution set for $y = Xa$ contains only one solution of the required sparseness. It has been shown that if $X$ is a fully populated random matrix with entries drawn from an i.i.d. Gaussian random sequence, then exact recovery of $a$ can be guaranteed with high probability from $O(s \log(n/s))$ measurements [6].
In addition, recovery is possible by solving a convex optimization problem and is robust to measurement noise [7]. As an extension of CS, there is also recent work on the detection of sparse signals from compressive measurements [8].

Concentration of measure inequalities are one of the leading techniques used in the theoretical analysis of randomized compressive operators [9]. A typical concentration inequality takes the following form [10]: for any fixed signal $a \in \mathbb{R}^n$ and a suitable random matrix $X$, the norm of the signal projected by $X$ will be highly concentrated around the norm of the original signal with high probability. In other words, there exist constants $c_1$ and $c_2$ such that for any fixed $a \in \mathbb{R}^n$,

$$P\left\{\, \left| \|Xa\|_2^2 - E\left[\|Xa\|_2^2\right] \right| \geq \epsilon \, E\left[\|Xa\|_2^2\right] \,\right\} \leq c_1 e^{-c_2 d c_0(\epsilon)}, \qquad (1)$$

where $c_0$ is a function of $\epsilon \in (0,1)$. In this paper, we derive concentration of measure inequalities for random, compressive Toeplitz matrices and discuss applications to detection and system identification.

A. Compressive Toeplitz Matrices

While the recovery results discussed in the previous section are for a measurement matrix $X$ with independent elements, many applications will require $X$ to have a particular structure. In particular, when dynamical systems are involved, the measurement process may involve the convolution of an unknown system impulse response with a known input. Consider identifying the impulse response of a linear time-invariant (LTI) system from its input-output observations. Let $\{x_k\}_{k=1}^{n+d-1}$ be the applied input sequence to an LTI system characterized by its finite impulse response $\{a_k\}_{k=1}^{n}$. Then the corresponding output $y$ is calculated from the time-domain convolution $y = x * a$. Considering the $x_k$ and $a_k$ sequences to be zero-padded from both sides, each output sample $y_i$ can be written as

$$y_i = \sum_{j=1}^{n} a_j x_{i-j}. \qquad (2)$$

If we only keep $d$ observations of the system, $\{y_k\}_{k=n+1}^{n+d}$, then (2) can be written in a matrix-vector multiplication format as $y = Xa$, where

$$X = \begin{bmatrix} x_n & x_{n-1} & \cdots & x_1 \\ x_{n+1} & x_n & \cdots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ x_{n+d-1} & x_{n+d-2} & \cdots & x_d \end{bmatrix} \qquad (3)$$

is a $d \times n$ Toeplitz matrix. Supposing the system has an $s$-sparse impulse response, we are interested in efficiently acquiring and retrieving $a$ via its multiplication by the $d \times n$ compressive Toeplitz matrix $X$. We call this a compressive Toeplitz matrix because $d < n$. In addition to the application to convolution measurement problems, it is worth noting that while fully populated random matrices require $dn$ elements to be generated, Toeplitz matrices require only $n + d - 1$ distinct random entries, which may provide an advantage in other CS applications.
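To make the construction concrete, here is a minimal sketch (our illustration, not code from the paper) that builds the $d \times n$ compressive Toeplitz matrix of (3) from an input sequence and checks it against the time-domain convolution; the helper name build_toeplitz is our own.

```python
# A minimal sketch (not from the paper): build the d x n compressive
# Toeplitz matrix X of equation (3) and check that y = X a reproduces the
# d retained samples of the time-domain convolution y = x * a.
import numpy as np

def build_toeplitz(x, n, d):
    """1-indexed row i of equation (3) is [x_{n+i-1}, x_{n+i-2}, ..., x_i];
    in 0-indexed terms, row i is x[i:i+n] reversed."""
    X = np.empty((d, n))
    for i in range(d):
        X[i, :] = x[i:i + n][::-1]
    return X

rng = np.random.default_rng(0)
n, d, s = 50, 15, 5
x = rng.standard_normal(n + d - 1)          # i.i.d. Gaussian input sequence
a = np.zeros(n)
a[rng.choice(n, s, replace=False)] = rng.standard_normal(s)   # s-sparse

X = build_toeplitz(x, n, d)
# the paper keeps samples y_{n+1}, ..., y_{n+d}; these sit at 0-indexed
# positions n-1, ..., n+d-2 of the full convolution np.convolve(x, a)
assert np.allclose(X @ a, np.convolve(x, a)[n - 1:n - 1 + d])
```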

B. Related Results

Compressive Toeplitz matrices and convolution have been previously studied in the context of compressive sensing in [11]-[17], with applications including channel estimation and synthetic aperture radar. The work in this paper is most closely related to [17], which also derives a concentration of measure bound for compressive Toeplitz matrices and utilizes this bound to study recovery. A comparison between our work and this work is given in the next section, with further discussion of related work postponed to later sections.

C. Main Result

In this paper, we derive a concentration of measure bound for compressive Toeplitz matrices as given in (3), with entries drawn from an i.i.d. Gaussian random sequence. Our main result, detailed in Theorem 1, states that the upper and lower probability tail bounds depend on the number of measurements $d$ and on the eigenvalues of the measurement covariance matrix $P = P(a)$, defined as

$$P := \begin{bmatrix} R_a(0) & R_a(1) & \cdots & R_a(d-1) \\ R_a(1) & R_a(0) & \cdots & R_a(d-2) \\ \vdots & \vdots & \ddots & \vdots \\ R_a(d-1) & R_a(d-2) & \cdots & R_a(0) \end{bmatrix}, \qquad (4)$$

where

$$R_a(\tau) := \sum_{i=1}^{n-\tau} a_i a_{i+\tau}$$

denotes the un-normalized sample autocorrelation function of $a \in \mathbb{R}^n$.

Theorem 1: Let $a \in \mathbb{R}^n$ be fixed, and let $y = Xa$. Define two quantities $\rho(a)$ and $\mu(a)$ associated with the eigenvalues of the measurement covariance matrix $P = P(a)$ as

$$\rho(a) := \frac{\max_i \lambda_i}{\|a\|_2^2} \qquad (5)$$

and

$$\mu(a) := \frac{\sum_{i=1}^{d} \lambda_i^2}{d \|a\|_2^4}, \qquad (6)$$

where $\{\lambda_i\}$ are the eigenvalues of $P$. Suppose $X$ is a random compressive Toeplitz matrix with i.i.d. Gaussian entries having zero mean and unit variance. Then for any $\delta \in (0,1)$, the upper tail probability bound is

$$P\left\{ \|y\|_2^2 > d\|a\|_2^2 (1+\delta) \right\} \leq \left( (1+\delta) e^{-\delta} \right)^{\frac{d}{2\rho(a)}} \qquad (7)$$

and the lower tail probability bound is

$$P\left\{ \|y\|_2^2 < d\|a\|_2^2 (1-\delta) \right\} \leq e^{-\frac{d\delta^2 (1+2\delta)}{4(1+\delta)^2 \mu(a)}}. \qquad (8)$$

In [17], an upper and lower bound essentially identical to (7) is given, but only for a range of $\delta$ bounded away from 0. Our result (8) and proof technique appear to be new.
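The quantities appearing in Theorem 1 are easy to compute numerically. The following sketch (ours, with hypothetical helper names; it assumes NumPy and SciPy) forms $P(a)$ from the sample autocorrelation and evaluates $\rho(a)$ and $\mu(a)$:

```python
# Sketch: form P(a) of equation (4) from the un-normalized sample
# autocorrelation R_a, then compute rho(a) and mu(a) of equations (5)-(6).
import numpy as np
from scipy.linalg import toeplitz

def covariance_P(a, d):
    n = len(a)
    # np.correlate(a, a, 'full')[n-1+tau] = sum_i a_i a_{i+tau} = R_a(tau)
    R = np.correlate(a, a, mode="full")[n - 1:]
    R = np.concatenate([R, np.zeros(max(0, d - n))])   # R_a(tau) = 0, tau >= n
    return toeplitz(R[:d])

def rho_mu(a, d):
    lam = np.linalg.eigvalsh(covariance_P(a, d))   # eigenvalues of P
    norm2 = np.sum(a ** 2)                         # ||a||_2^2
    rho = lam.max() / norm2                        # equation (5)
    mu = np.sum(lam ** 2) / (d * norm2 ** 2)       # equation (6)
    return rho, mu

rng = np.random.default_rng(1)
a = np.zeros(50)
a[rng.choice(50, 5, replace=False)] = rng.standard_normal(5)
print(rho_mu(a, d=15))   # mu(a) <= rho(a), and rho(a) <= s = 5 (Section III)
```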
±ty y ] et I tp, 9 where we require t 0, max i λ i for the case involving et I tp, an y enotes the transpose of y

Proof: [ ] E e ±ty y e ±ty y e π et P y P y y e π et P y P tiy y P et ti et P et P ti et P et I tp Remark : As a special case of Lemma, if y R is a scalar Gaussian ranom variable of unit variance, then we obtain the well known result an E [ e ±ty] t 0 The main objective of this paper is to fin bouns on P { y > a + } a P { y < a } b We use Markov s inequality an Chernoff s bouning metho for computing the upper an lower probability tail bouns a an b Specifically, for a ranom variable Z, an all t > 0, P {Z > } P { e tz > e t} E [ e tz] e t see eg [8] Applying to a yiels [ ] P { y > a + } E e ty y 3 e a +t Using Lemma, we rewrite the right han sie of 3 as [ ] E e ty y e et I tp a +t e a +t 4 In 4, t > 0 is a free variable which can be varie to fin the tightest possible boun, resulting in the Chernoff boun approximation of the inequality Claim : The value of t that minimizes 4 satisfies λ i a tλ + 5 i Proof: See Appenix B Unfortunately, 5 is ifficult to use for further analysis However, since 4 is vali for all t, we are free to choose a suboptimal, but still useful value We propose to use t + fa a, where f is a function of a that will be chosen later in a way that leas to tighter bouns Now we are reay to prove the main theorem We state the upper tail probability boun in Lemma 3 an the lower tail probability boun in Lemma 4 Lemma 3: Let y Xa R be a zero mean Gaussian ranom vector with covariance matrix P, where X is a n compressive Toeplitz matrix populate with ii zero mean, unit variance Gaussian ranom variables, an a is an s-sparse vector of length n Then, for any 0,, P { y > a + } + e ρa 6 Proof: From 3 an 4, we get P { y > a + } et I tp e a +t 7 Choosing t as t + ρ a a, 8 the right han sie of 7 can be written as et I P + ρ a a e ρa 9 This expression can be simplifie by applying the previous lemmas Note that P λ i et I + ρ a a + ρ a a e log λ i + ρa a Using the fact that log c c c log c for any c, c [0, ] an tr P a, we have e log + λ i ρa a e e trp ρa a λ i ρa a log + e log ρa + + Combining 7, 9, an 0 gives us P { y > a + } + log + ρa 0 ρa e + e ρa ρa Lemma 4: Using the same assumptions as in Lemma 3, for any 0,, P { y < a } e 3 µa +
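Lemma 2 can be sanity-checked by simulation. The sketch below (ours; it reuses the hypothetical build_toeplitz and covariance_P helpers from the earlier sketches) compares the empirical moment generating function of $\|y\|_2^2$ with $\det(I + 2tP)^{-1/2}$, drawing $y = Xa$ so that $\mathrm{Cov}(y) = P(a)$ by Lemma 1:

```python
# Sketch: Monte-Carlo check of E[exp(-t y'y)] = det(I + 2tP)^{-1/2}.
import numpy as np

rng = np.random.default_rng(2)
n, d, t, trials = 30, 10, 0.05, 20000
a = np.zeros(n)
a[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)

P = covariance_P(a, d)
exact = np.linalg.det(np.eye(d) + 2 * t * P) ** (-0.5)

mgf = 0.0
for _ in range(trials):
    x = rng.standard_normal(n + d - 1)
    y = build_toeplitz(x, n, d) @ a          # y is N(0, P) by Lemma 1
    mgf += np.exp(-t * (y @ y)) / trials
print(exact, mgf)                            # agree up to Monte-Carlo error
```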

The main objective of this paper is to find bounds on

$$P\left\{ \|y\|_2^2 > d\|a\|_2^2(1+\delta) \right\} \qquad (11a)$$

and

$$P\left\{ \|y\|_2^2 < d\|a\|_2^2(1-\delta) \right\}. \qquad (11b)$$

We use Markov's inequality and Chernoff's bounding method for computing the upper and lower probability tail bounds (11a) and (11b). Specifically, for a random variable $Z$ and all $t > 0$,

$$P\{Z > \epsilon\} = P\left\{ e^{tZ} > e^{t\epsilon} \right\} \leq \frac{E\left[e^{tZ}\right]}{e^{t\epsilon}} \qquad (12)$$

(see, e.g., [18]). Applying (12) to (11a) yields

$$P\left\{ \|y\|_2^2 > d\|a\|_2^2(1+\delta) \right\} \leq \frac{E\left[ e^{t\|y\|_2^2} \right]}{e^{d\|a\|_2^2(1+\delta)t}}. \qquad (13)$$

Using Lemma 2, we rewrite the right-hand side of (13) as

$$\frac{E\left[ e^{t\|y\|_2^2} \right]}{e^{d\|a\|_2^2(1+\delta)t}} = \frac{\det(I - 2tP)^{-1/2}}{e^{d\|a\|_2^2(1+\delta)t}}. \qquad (14)$$

In (14), $t > 0$ is a free variable which can be varied to find the tightest possible bound, resulting in the Chernoff bound approximation of the inequality.

Claim 1: The value of $t$ that minimizes (14) satisfies

$$\sum_{i=1}^{d} \frac{\lambda_i}{1 - 2t\lambda_i} = d\|a\|_2^2(1+\delta). \qquad (15)$$

Proof: See Appendix B.

Unfortunately, (15) is difficult to use for further analysis. However, since (14) is valid for all admissible $t$, we are free to choose a suboptimal, but still useful, value. We propose to use $t = \frac{\delta}{2(1+\delta)f(a)\|a\|_2^2}$, where $f$ is a function of $a$ that will be chosen later in a way that leads to tighter bounds. Now we are ready to prove the main theorem. We state the upper tail probability bound in Lemma 3 and the lower tail probability bound in Lemma 4.

Lemma 3: Let $y = Xa \in \mathbb{R}^d$ be a zero-mean Gaussian random vector with covariance matrix $P$, where $X$ is a $d \times n$ compressive Toeplitz matrix populated with i.i.d. zero-mean, unit-variance Gaussian random variables, and $a$ is an $s$-sparse vector of length $n$. Then, for any $\delta \in (0,1)$,

$$P\left\{ \|y\|_2^2 > d\|a\|_2^2(1+\delta) \right\} \leq \left( (1+\delta)e^{-\delta} \right)^{\frac{d}{2\rho(a)}}. \qquad (16)$$

Proof: From (13) and (14), we get

$$P\left\{ \|y\|_2^2 > d\|a\|_2^2(1+\delta) \right\} \leq \frac{\det(I - 2tP)^{-1/2}}{e^{d\|a\|_2^2(1+\delta)t}}. \qquad (17)$$

Choosing $t$ as

$$t = \frac{\delta}{2(1+\delta)\rho(a)\|a\|_2^2}, \qquad (18)$$

the right-hand side of (17) can be written as

$$\det\left(I - \frac{\delta}{(1+\delta)\rho(a)\|a\|_2^2} P\right)^{-1/2} e^{-\frac{d\delta}{2\rho(a)}}. \qquad (19)$$

This expression can be simplified by applying the previous lemmas. Note that

$$\det\left(I - \frac{\delta}{(1+\delta)\rho(a)\|a\|_2^2} P\right)^{-1/2} = \prod_{i=1}^{d} \left(1 - \frac{\lambda_i}{\rho(a)\|a\|_2^2} \cdot \frac{\delta}{1+\delta}\right)^{-1/2} = e^{-\frac{1}{2}\sum_{i=1}^{d} \log\left(1 - \frac{\lambda_i}{\rho(a)\|a\|_2^2} \cdot \frac{\delta}{1+\delta}\right)}.$$

Using the fact that $\log(1 - c_1 c_2) \geq c_1 \log(1 - c_2)$ for any $c_1, c_2 \in [0,1]$ and $\mathrm{tr}(P) = \sum_i \lambda_i = d\|a\|_2^2$, we have

$$e^{-\frac{1}{2}\sum_i \log\left(1 - \frac{\lambda_i}{\rho(a)\|a\|_2^2} \cdot \frac{\delta}{1+\delta}\right)} \leq e^{-\frac{1}{2}\sum_i \frac{\lambda_i}{\rho(a)\|a\|_2^2} \log\left(1 - \frac{\delta}{1+\delta}\right)} = e^{\frac{\mathrm{tr}(P)}{2\rho(a)\|a\|_2^2} \log(1+\delta)} = (1+\delta)^{\frac{d}{2\rho(a)}}. \qquad (20)$$

Combining (17), (19), and (20) gives us

$$P\left\{ \|y\|_2^2 > d\|a\|_2^2(1+\delta) \right\} \leq (1+\delta)^{\frac{d}{2\rho(a)}} e^{-\frac{d\delta}{2\rho(a)}} = \left( (1+\delta)e^{-\delta} \right)^{\frac{d}{2\rho(a)}}.$$

Lemma 4: Using the same assumptions as in Lemma 3, for any $\delta \in (0,1)$,

$$P\left\{ \|y\|_2^2 < d\|a\|_2^2(1-\delta) \right\} \leq e^{-\frac{d\delta^2(1+2\delta)}{4(1+\delta)^2\mu(a)}}. \qquad (21)$$

Proof: Applying Markov's inequality to (11b), we obtain

$$P\left\{ \|y\|_2^2 < d\|a\|_2^2(1-\delta) \right\} = P\left\{ -\|y\|_2^2 > -d\|a\|_2^2(1-\delta) \right\} \leq E\left[ e^{-t\|y\|_2^2} \right] e^{d\|a\|_2^2(1-\delta)t}.$$

Using Lemma 2, this implies

$$P\left\{ \|y\|_2^2 < d\|a\|_2^2(1-\delta) \right\} \leq \det(I + 2tP)^{-1/2} e^{d\|a\|_2^2(1-\delta)t}. \qquad (22)$$

In this case, we choose $t = \frac{\delta}{2(1+\delta)\mu(a)\|a\|_2^2}$. Plugging $t$ into (22) and following similar steps as for the upper tail bound, we get

$$\det(I + 2tP)^{-1/2} = \prod_{i=1}^{d}\left(1 + \frac{\delta}{1+\delta} \cdot \frac{\lambda_i}{\mu(a)\|a\|_2^2}\right)^{-1/2} = e^{-\frac{1}{2}\sum_{i=1}^{d} \log\left(1 + \frac{\delta}{1+\delta} \cdot \frac{\lambda_i}{\mu(a)\|a\|_2^2}\right)}. \qquad (23)$$

Since $\log(1+c) \geq c - \frac{c^2}{2}$ for $c > 0$,

$$\sum_{i=1}^{d} \log\left(1 + \frac{\delta}{1+\delta} \cdot \frac{\lambda_i}{\mu(a)\|a\|_2^2}\right) \geq \frac{\delta}{1+\delta} \sum_{i=1}^{d} \frac{\lambda_i}{\mu(a)\|a\|_2^2} - \frac{\delta^2}{2(1+\delta)^2} \sum_{i=1}^{d} \frac{\lambda_i^2}{\mu(a)^2\|a\|_2^4}.$$

Thus, using $\sum_i \lambda_i = d\|a\|_2^2$ and $\sum_i \lambda_i^2 = d\mu(a)\|a\|_2^4$ (the latter from (6)),

$$\sum_{i=1}^{d} \log\left(1 + \frac{\delta}{1+\delta} \cdot \frac{\lambda_i}{\mu(a)\|a\|_2^2}\right) \geq \frac{d\delta}{(1+\delta)\mu(a)} - \frac{d\delta^2}{2(1+\delta)^2\mu(a)}.$$

Using this in (23) gives the bound

$$\det(I + 2tP)^{-1/2} \leq e^{-\frac{d\delta}{2(1+\delta)\mu(a)} + \frac{d\delta^2}{4(1+\delta)^2\mu(a)}}. \qquad (24)$$

By substituting (24) in (22), we obtain

$$P\left\{ \|y\|_2^2 < d\|a\|_2^2(1-\delta) \right\} \leq e^{\frac{d\delta(1-\delta)}{2(1+\delta)\mu(a)}} e^{-\frac{d\delta}{2(1+\delta)\mu(a)} + \frac{d\delta^2}{4(1+\delta)^2\mu(a)}} = e^{-\frac{d\delta^2(1+2\delta)}{4(1+\delta)^2\mu(a)}}.$$

Fig. 1. Comparison between the upper and lower tail bound functions $f_{\mathrm{upper}}(\delta)$ and $f_{\mathrm{lower}}(\delta)$ for $\delta \in (0,1)$.

III. CONCENTRATION BOUNDS FOR SPARSE SIGNALS

As can be seen from the statement of Theorem 1, the upper and lower tail probability bounds are functions of the quantities $\rho(a)$ and $\mu(a)$. For any $\delta \in (0,1)$, we have the upper tail probability bound $P\{\|y\|_2^2 > d\|a\|_2^2(1+\delta)\} \leq f_{\mathrm{upper}}(\delta)$ and the lower tail probability bound $P\{\|y\|_2^2 < d\|a\|_2^2(1-\delta)\} \leq f_{\mathrm{lower}}(\delta)$, where

$$f_{\mathrm{upper}}(\delta) = \left( (1+\delta)e^{-\delta} \right)^{\frac{d}{2\rho(a)}} \qquad \text{and} \qquad f_{\mathrm{lower}}(\delta) = e^{-\frac{d\delta^2(1+2\delta)}{4(1+\delta)^2\mu(a)}}.$$

Figure 1 shows a comparison between $f_{\mathrm{upper}}$ and $f_{\mathrm{lower}}$ for $\delta \in (0,1)$. These functions are similar and behave like $e^{-\frac{d\delta^2}{4\rho(a)}}$ and $e^{-\frac{d\delta^2}{4\mu(a)}}$, respectively, for small $\delta$. Typically, we are seeking to find values of $d$ for which the bound is small for a particular class of signals, which in our case will be $s$-sparse signals. Consequently, it is necessary to obtain an upper bound for the quantities $\rho(a)$ and $\mu(a)$ for the class of signals of interest. It is easy to show that $\mu(a) \leq \rho(a)$ for all $a$. Therefore, we limit our problem to finding the sharpest bound for $\rho(a)$.

Note that $P$ is a symmetric Toeplitz matrix which can be decomposed as $P = AA^T$, where $A$ is a $d \times (n+d-1)$ Toeplitz matrix formed by $a$,

$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n & 0 & \cdots & 0 \\ 0 & a_1 & \cdots & a_{n-1} & a_n & \cdots & 0 \\ \vdots & & \ddots & & & \ddots & \vdots \\ 0 & \cdots & 0 & a_1 & a_2 & \cdots & a_n \end{bmatrix},$$

and $A^T$ is the transpose of $A$. By the Cauchy Interlacing Theorem [19],

$$\max_i \lambda_i(P) = \max_i \lambda_i(AA^T) \leq \max_i \lambda_i(\tilde{A}\tilde{A}^T),$$

where $\tilde{A}$ is an $(n+d-1) \times (n+d-1)$ circulant matrix with $\tilde{a} = [a_1, a_2, \dots, a_n, 0, \dots, 0]$ as its first row. Since $\lambda_i(\tilde{A}\tilde{A}^T) = |\lambda_i(\tilde{A})|^2$, an upper bound for $\rho(a)$ is provided by the maximum eigenvalue of $\tilde{A}$. Since $\tilde{A}$ is circulant, $\lambda_i(\tilde{A})$ simply equals the un-normalized $(n+d-1)$-length Discrete Fourier Transform (DFT) of the first row of $\tilde{A}$. As a result,

$$\lambda_i(\tilde{A}) = \sum_{k=1}^{n} a_k e^{-j 2\pi i k / (n+d-1)}. \qquad (25)$$

When $a$ is an $s$-sparse signal, only $s < n+d-1$ terms in this summation are nonzero; therefore, by the Cauchy-Schwarz inequality, $|\lambda_i(\tilde{A})|^2 \leq s\|a\|_2^2$. This results in the bound $\rho(a) \leq s$.

Remark 2: Although $\mu(a) \leq \rho(a) \leq s$ for all $s$-sparse signals, this appears to be a highly pessimistic bound for most signals. Numerical experiments have shown that for Gaussian random $s$-sparse signals, $E[\rho(a)]$ grows like $\log(s)$. However, it remains to be seen if this non-uniformity can be exploited.
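The chain of bounds above is easy to verify numerically. This sketch (ours, reusing the hypothetical covariance_P helper from the earlier sketch) compares $\rho(a)$, the squared-magnitude DFT bound obtained from the circulant embedding (25), and the sparsity level $s$:

```python
# Sketch: rho(a) <= max_i |DFT_{n+d-1}(a)_i|^2 / ||a||_2^2 <= s, s-sparse a.
import numpy as np

rng = np.random.default_rng(3)
n, d, s = 60, 20, 6
a = np.zeros(n)
a[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

norm2 = np.sum(a ** 2)
rho = np.linalg.eigvalsh(covariance_P(a, d)).max() / norm2
# np.fft.fft zero-pads a to length n+d-1, i.e., the first row of the
# circulant embedding; its entries are the eigenvalues in (25)
dft_bound = np.max(np.abs(np.fft.fft(a, n + d - 1)) ** 2) / norm2
print(rho, dft_bound, s)                     # rho(a) <= dft_bound <= s
```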
IV. SYSTEM IDENTIFICATION / TOEPLITZ RECOVERY

In this section, we address the problem of recovering a sparse signal after convolution. We seek to find a number of measurements $d$ of an $s$-sparse signal $a$, using a random compressive Toeplitz measurement matrix $X$, that is sufficient to ensure that $a$ can be recovered with high probability.

A. Related Work

Considering structure in the measurement matrix $X$ was addressed by Tropp et al. [11], one of the early papers to consider reconstruction of a signal from its convolution with a fixed Finite Impulse Response (FIR) filter having random taps. Bajwa et al. [12] studied Toeplitz-structured compressed sensing matrices with applications to sparse channel estimation. They allowed the entries of the measurement matrix to be drawn from a symmetric Bernoulli distribution; later, they extended this to random matrices whose entries are bounded or Gaussian-distributed. It is shown that $O(s^2 \log n)$ measurements are sufficient in order to have exact signal recovery [14], [15], which appears to be the best result to date. This compares to $O(s \log(n/s))$ measurements when $X$ is unstructured; see, e.g., [20]. Rauhut [16] considered circulant and Toeplitz matrices whose entries are independent Bernoulli $\pm 1$ random variables, and furthermore constrained the signs of the non-zero entries of the $s$-sparse signal to be drawn from a Bernoulli distribution as well. In this configuration, the required number of measurements for exact recovery scales linearly with the sparsity, up to a log-factor of the signal length [16]. Xiang et al. [17] proposed convolution with a white noise waveform followed by deterministic subsampling as a framework for CS. Imposing a uniformly random sign pattern on the non-zero entries of the signal, they require $O(s \log^5(n/\delta))$ measurements to have exact recovery with probability exceeding $1 - \delta$ [17].

B. Recovery Condition: The Restricted Isometry Property

Introduced by Candès and Tao [21], the Restricted Isometry Property (RIP) is an isometry condition on measurement matrices used in CS.

Definition 1: $X$ satisfies the RIP of order $s$ if there exists a $\delta_s \in (0,1)$ such that

$$(1 - \delta_s)\|a\|_2^2 \leq \|Xa\|_2^2 \leq (1 + \delta_s)\|a\|_2^2 \qquad (26)$$

holds for all $s$-sparse signals $a$.

Given a matrix $X \in \mathbb{R}^{d \times n}$ and any set $T$ of column indices, we denote by $X_T$ the $d \times |T|$ submatrix of $X$ composed of these columns. Equation (26) can be interpreted as a requirement that all of the eigenvalues of the Grammian matrix $X_T^T X_T$ lie in the range $[1 - \delta_s, 1 + \delta_s]$ for all sets $T$ with $|T| \leq s$. When the RIP is satisfied of order $2s$ with $\delta_{2s} < \sqrt{2} - 1$, then correct recovery of an $s$-sparse signal can be obtained by finding the minimizer of the following optimization problem [7]:

$$\min_{a \in \mathbb{R}^n} \|a\|_1 \quad \text{subject to} \quad y = Xa.$$

In [7], it is also shown that robust recovery is possible in the presence of measurement noise.

Verifying the RIP is generally a difficult task. This property requires a bounded condition number for all submatrices built by selecting $|T| \leq s$ arbitrary columns; there are $\binom{n}{s}$ such submatrices to check, and computing the spectral norm of a matrix is not generally an easy task. However, for certain random constructions of $X$, the RIP follows in a simple way from concentration of measure inequalities. An important result in this vein concerns the $d \times n$ matrix $X$ whose entries $x_{i,j}$ are independent realizations of Gaussian random variables, $x_{i,j} \sim \mathcal{N}(0, \tfrac{1}{d})$. (All Gaussian random variables mentioned in the remainder of this section will have this same mean and variance.) Note that in this realization of $X$, there are $dn$ fully independent Gaussian random entries. We call such realizations simply Unstructured $X$, in contrast to Toeplitz $X$, which have the form of (3). In previous work, a concentration of measure inequality has been used to show that Unstructured $X$ satisfies the RIP with high probability. The relevant concentration of measure inequality for Unstructured $X$ is as follows (see, e.g., [20]): for any $\epsilon \in (0,1)$,

$$P\left\{ \left| \|Xa\|_2^2 - \|a\|_2^2 \right| \geq \epsilon \|a\|_2^2 \right\} \leq 2 e^{-d c_0(\epsilon)} \qquad (27)$$

with $c_0(\epsilon) = \frac{\epsilon^2}{4} - \frac{\epsilon^3}{6}$. Based on this concentration inequality, the following result was proven in [20].

Theorem 2: An Unstructured $X$ with i.i.d. Gaussian entries satisfies the RIP of order $s$ with high probability for $d = O(s \log(n/s))$.

An approach identical to the one taken in [20] can be used to establish the RIP for Toeplitz $X$ based on the concentration of measure inequalities given in Theorem 1. In particular, some basic bounding gives us the following result for Toeplitz $X$: for any $\epsilon \in (0,1)$,

$$P\left\{ \left| \|Xa\|_2^2 - \|a\|_2^2 \right| \geq \epsilon \|a\|_2^2 \right\} \leq 2 e^{-\frac{d}{\rho(a)} c_0(\epsilon)}. \qquad (28)$$

Comparing inequality (28) with (27) indicates that these concentration inequalities differ only by the factor $\rho(a)$. Since $\rho(a)$ is bounded by $s$ for sparse signals, we have the following result, whose proof is omitted for space.

Theorem 3: A Toeplitz $X$ with i.i.d. Gaussian entries satisfies the RIP of order $s$ with high probability if $d = O(s^2 \log(n/s))$.

Note that this result is essentially identical to the bounds given previously in [14], [15]. However, given the extremely non-uniform distribution of $\rho(a)$ over the set of all $s$-sparse signals $a$, it may be that a modified approach to the RIP condition will allow the recovery results to be tightened. In particular, we note that if the sign pattern for the non-zero entries of the signal $a$ is chosen to be random, $\rho(a)$ grows like $\log(s)$. This remains an area of current research.
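As an illustration of the recovery program above (a sketch under our own choices: the paper reports no implementation, and the linear-programming reformulation and SciPy solver here are ours), $\ell_1$ minimization with a compressive Toeplitz $X$ can be run with an off-the-shelf LP solver by splitting $a = u - v$ with $u, v \geq 0$:

```python
# Sketch: basis pursuit, min ||a||_1 subject to X a = y, as a linear
# program. Reuses the hypothetical build_toeplitz helper; requires scipy.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, d, s = 64, 32, 4
a_true = np.zeros(n)
a_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

X = build_toeplitz(rng.standard_normal(n + d - 1), n, d)
y = X @ a_true

res = linprog(c=np.ones(2 * n),              # minimize sum(u) + sum(v)
              A_eq=np.hstack([X, -X]),       # X (u - v) = y
              b_eq=y,
              bounds=[(0, None)] * (2 * n))
a_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(a_hat - a_true)))        # near zero when recovery succeeds
```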

V. DETECTION WITH COMPRESSIVE SENSING

A. Problem Setup

In another application, we consider solving a CS detection problem. Consider an FIR filter with impulse response $\{a_k\}_{k=1}^{n}$. The response of this filter to a test signal $x_k$ is as described in (2). Moreover, suppose the observations are corrupted by a random additive measurement noise $z$, so that $y = Xa + z$. Figure 2 shows the schematic of this filter.

Fig. 2. FIR filter with impulse response $a_k$: the input $\{x_k\}_{k=1}^{n+d-1}$ is filtered by $\{a_k\}_{k=1}^{n}$ and corrupted by the additive noise $\{z_k\}_{k=n+1}^{n+d}$, yielding $y = Xa + z$.

Now a detection problem can be considered as follows: the impulse response $a_k$ is known, but we wish to detect whether the dynamics of the system change to $b_k$, where the difference $c_k = b_k - a_k$ is known. The response $\{y_k\}_{k=n+1}^{n+d}$ to a known input $x_k$ is monitored, and the goal is to detect when the impulse response of the system changes using $d < n$ measurements. Since the nominal impulse response is known, the expected response $Xa$ can be subtracted off, and thus, without loss of generality, we can consider $a = 0$. The detection problem can be formulated as follows [8]. Define two events $E_0$ and $E_1$ as

$$E_0: \; y = z \qquad \text{and} \qquad E_1: \; y = Xc + z,$$

where $z$ is a vector of i.i.d. Gaussian noise. Due to the presence of random noise, the algorithm can result in false detection. This leads to defining the probabilities $P_{FA}$ and $P_D$ as

$$P_{FA} = P\{E_1 \text{ chosen when } E_0\} \qquad \text{and} \qquad P_D = P\{E_1 \text{ chosen when } E_1\},$$

where $P_{FA}$ denotes the false-alarm probability and $P_D$ denotes the detection probability. A Receiver Operating Curve (ROC) is a plot of $P_D$ as a function of $P_{FA}$. A Neyman-Pearson (NP) detector maximizes $P_D$ for a given limit on failure probability, $P_{FA} \leq \alpha$. The NP test is based on the likelihood ratio, which in our case can be written as

$$y^T Xc \; \underset{E_0}{\overset{E_1}{\gtrless}} \; \gamma, \qquad (29)$$

where the threshold $\gamma$ is chosen to meet the constraint $P_{FA} = \alpha$. Consequently, we consider the compressive detector

$$t := y^T Xc. \qquad (30)$$

By evaluating $t$ and comparing against the threshold $\gamma$, we are now able to distinguish between the occurrence of the two events $E_0$ and $E_1$. To fix the failure limit, we set $P_{FA} = \alpha$, which leads to

$$P_D(\alpha) = Q\left( Q^{-1}(\alpha) - \frac{\|Xc\|_2}{\sigma} \right), \qquad (31)$$

where

$$Q(q) = \frac{1}{\sqrt{2\pi}} \int_q^{\infty} e^{-u^2/2} \, du. \qquad (32)$$

Since $P_D(\alpha)$ is a function of $\|Xc\|_2$, we postulate that the ultimate performance of the detector will depend on $\mu(c)$ and $\rho(c)$, and consequently on $P$, which is the sample covariance matrix for $c$. Furthermore, this dependence would not occur if the measurement utilized an Unstructured measurement matrix, which, of course, would not apply to the FIR filter measurement considered here, but is a useful comparison to the structured measurement imposed by convolution.
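Since (31) depends on the measurement matrix only through $\|Xc\|_2$, a single ROC is cheap to evaluate. The sketch below (ours, with arbitrary illustrative problem sizes rather than the paper's experimental values; it reuses the hypothetical build_toeplitz helper and uses scipy.stats.norm for $Q$) traces $P_D(\alpha)$ for one Toeplitz draw:

```python
# Sketch: P_D(alpha) = Q(Q^{-1}(alpha) - ||Xc||_2 / sigma), equation (31).
# Averaging such curves over many draws of X gives plots like Figs. 3-5.
import numpy as np
from scipy.stats import norm                 # norm.sf = Q, norm.isf = Q^{-1}

rng = np.random.default_rng(5)
n, d, sigma = 100, 40, 0.3
c = np.zeros(n)
c[:8] = 1.0                                  # a block of equal-sign entries

Xc = build_toeplitz(rng.standard_normal(n + d - 1), n, d) @ c

alpha = np.linspace(1e-4, 1.0, 200)          # false-alarm limits P_FA = alpha
P_D = norm.sf(norm.isf(alpha) - np.linalg.norm(Xc) / sigma)
print(np.c_[alpha[:3], P_D[:3]])             # (P_FA, P_D) pairs on the ROC
```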
B. Experiments and ROCs

In these experiments, we test different signals $c$ with different $\rho(c)$ values. Several signal classes are designed based on the location, sign, and value of their non-zero elements. For a given fixed signal $c$, 1000 Unstructured and Toeplitz $X$ matrices are generated, and for each $X$, a curve of $P_D$ over $P_{FA}$ is computed using (31).

Figure 3 shows the result for a special class of signals whose non-zero elements are placed in a block at the front, with the same value and sign; the signal length $n$ and the number of measurements $d$ are held fixed across all realizations of $X$. As can be seen, the ROCs associated with Toeplitz $X$ are more scattered compared to the ROCs obtained when we apply Unstructured $X$. This result is due to the weaker concentration of $\|Xc\|_2$ for Toeplitz $X$.

Fig. 3. ROCs for 1000 Unstructured and Toeplitz $X$ matrices for a fixed signal $c$ with $\rho(c) = 50$. The solid black curve is the average of the 1000 curves.

In order to compare the ROCs of different signals with different $\rho(c)$ values when projected by random $d \times n$ Toeplitz matrices, we design an experiment with 6 different signals. Figures 4 and 5 show how these signals are treated by Unstructured and Toeplitz $X$ matrices.

Fig. 4. Average ROCs with 1000 Unstructured $X$ for 6 different signals $c$. Note that all curves are overlapping.

Fig. 5. Average ROCs with 1000 Toeplitz $X$ for 6 different signals $c$, with $(\rho_1, \mu_1) = (25, 16.68)$, $(\rho_2, \mu_2) = (50, 33.34)$, $(\rho_3, \mu_3) = (75, 50.00)$, $(\rho_4, \mu_4) = (100, 66.67)$, $(\rho_5, \mu_5) = (150, 100.00)$, and $(\rho_6, \mu_6) = (200, 133.48)$. The curves descend in the same order they appear in the legend.

As can be seen, the average ROCs do not change with respect to different signals when we apply Unstructured $X$; however, they do change when applying Toeplitz $X$. More importantly, it can be concluded that $\rho(c)$ has a direct influence on these curves: generally, signals with higher spectral norm $\rho(c)$ have lower ROCs, although this relation is not firm. In all of these experiments, we fix $\|c\|_2$ and let the noise standard deviation be $\sigma = 0.3$.

VI. CONCLUSION

Concentration of measure inequalities for compressive Toeplitz matrices were derived in this work. We considered two important quantities $\rho(a)$ and $\mu(a)$ associated with the eigenvalues of the covariance matrix $P$, and we addressed two applications for these types of matrices. A Toeplitz recovery application was considered, in which we want to recover an $s$-sparse signal $a$ from its compressive measurements $y = Xa$, where $X$ is a random, compressive Toeplitz matrix. We showed that for Toeplitz $X$, the required number of measurements should scale with $s\rho(a)$ for $X$ to satisfy the RIP of order $s$ for all $s$-sparse signals $a$. In another application, a CS detection problem was considered. Experimental results show that signals with different $\rho(a)$ values have different ROCs when measured by a Toeplitz matrix.

ACKNOWLEDGMENT

The authors gratefully acknowledge Chris Rozell, Han Lun Yap, Alejandro Weinstein, and Luis Tenorio for helpful conversations during the development of this work.

APPENDIX

A. Proof of Lemma 1

Proof: It is easy to show that $y$ is a zero-mean vector, since $E[y] = E[Xa] = E[X]a = 0$. The covariance matrix $P$ associated with $y$ can be calculated as follows:

$$P = E[yy^T] = \begin{bmatrix} E[y_1 y_1] & E[y_1 y_2] & \cdots & E[y_1 y_d] \\ E[y_2 y_1] & E[y_2 y_2] & \cdots & E[y_2 y_d] \\ \vdots & \vdots & \ddots & \vdots \\ E[y_d y_1] & E[y_d y_2] & \cdots & E[y_d y_d] \end{bmatrix}.$$

Now consider one of the elements, e.g., $E[y_l y_m]$. Without loss of generality, assume $m < l$. Then

$$E[y_l y_m] = a^T E\left[ \begin{bmatrix} x_{n+l-1} \\ x_{n+l-2} \\ \vdots \\ x_l \end{bmatrix} \begin{bmatrix} x_{n+m-1} & x_{n+m-2} & \cdots & x_m \end{bmatrix} \right] a = \sum_{i=1}^{n-(l-m)} a_i a_{i+l-m} = R_a(l-m),$$

since the middle expectation is a matrix of all zeros except for ones along its $(l-m)$-th sub-diagonal (the ones start at row $l-m+1$). Clearly, with $E[y_l y_m] = R_a(l-m)$, the covariance matrix $P$ has the form given in (4), which completes the proof. Note that the covariance matrix is a symmetric Toeplitz matrix.

B. Proof of Claim 1

Proof: We start by taking the derivative of (14) with respect to $t$ and setting it equal to zero. Using Jacobi's formula for an invertible matrix $A$,

$$\frac{\partial}{\partial \alpha} \det(A) = \det(A) \, \mathrm{tr}\left( A^{-1} \frac{\partial A}{\partial \alpha} \right),$$

we have

$$\frac{d}{dt}\left[ \det(I - 2tP)^{-1/2} e^{-d\|a\|_2^2(1+\delta)t} \right] = \det(I - 2tP)^{-1/2} e^{-d\|a\|_2^2(1+\delta)t} \left[ \mathrm{tr}\left( (I - 2tP)^{-1} P \right) - d\|a\|_2^2(1+\delta) \right].$$

Now, setting the above expression equal to zero, we get

$$\mathrm{tr}\left( (I - 2tP)^{-1} P \right) = d\|a\|_2^2(1+\delta).$$

Knowing that $P$ is a symmetric matrix, we can decompose it using the Singular Value Decomposition (SVD), $P = U\Sigma U^T$, where $U$ is an orthogonal matrix and $\Sigma$ is a diagonal matrix. Now, for $t \in \left(0, \frac{1}{2\max_i \lambda_i}\right)$, we get

$$\mathrm{tr}\left( (I - 2tP)^{-1} P \right) = \mathrm{tr}\left( U (I - 2t\Sigma)^{-1} U^T U \Sigma U^T \right) = \mathrm{tr}\left( (I - 2t\Sigma)^{-1} \Sigma \right).$$

From here, we get the following equation:

$$\sum_{i=1}^{d} \frac{\lambda_i}{1 - 2t\lambda_i} = d\|a\|_2^2(1+\delta),$$

where the $\lambda_i$ are the eigenvalues of the matrix $P$, which lie along the diagonal of $\Sigma$.

REFERENCES

[1] "Data, data everywhere," The Economist, 25 Feb. 2010.
[2] E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians, vol. 3, 2006.
[3] E. Candès and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.
[4] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, 2006.
[5] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[6] E. Candès and M. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, 2008.
[7] E. Candès, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathématique, vol. 346, no. 9-10, pp. 589-592, 2008.
[8] M. A. Davenport, P. T. Boufounos, M. B. Wakin, and R. G. Baraniuk, "Signal processing with compressive measurements," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 445-460, 2010.
[9] M. Ledoux, The Concentration of Measure Phenomenon. American Mathematical Society, 2001.
[10] D. Achlioptas, "Database-friendly random projections: Johnson-Lindenstrauss with binary coins," Journal of Computer and System Sciences, vol. 66, no. 4, pp. 671-687, 2003.
[11] J. Tropp, M. Wakin, M. Duarte, D. Baron, and R. Baraniuk, "Random filters for compressive sampling and reconstruction," in Proc. Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), 2006.
[12] W. Bajwa, J. Haupt, G. Raz, S. Wright, and R. Nowak, "Toeplitz-structured compressed sensing matrices," in IEEE/SP 14th Workshop on Statistical Signal Processing (SSP '07), 2007, pp. 294-298.
[13] J. Romberg, "Compressive sensing by random convolution," SIAM Journal on Imaging Sciences, vol. 2, no. 4, pp. 1098-1128, 2009.
[14] W. Bajwa, J. Haupt, G. Raz, and R. Nowak, "Compressed channel sensing," in Proc. 42nd Annu. Conf. Information Sciences and Systems (CISS '08), 2008, pp. 5-10.
[15] J. Haupt, W. Bajwa, G. Raz, and R. Nowak, "Toeplitz compressed sensing matrices with applications to sparse channel estimation," IEEE Trans. Inform. Theory, 2008.
[16] H. Rauhut, "Circulant and Toeplitz matrices in compressed sensing," Proc. SPARS '09, 2009.
[17] Y. Xiang, L. Li, and F. Li, "Compressive sensing by white random convolution," 2009. [Online]. Available: http://www.citebase.org/abstract?id=oai:arxiv.org:0909737
[18] G. Lugosi, "Concentration-of-measure inequalities," Lecture notes, 2004.
[19] S. Hwang, "Cauchy's interlace theorem for eigenvalues of Hermitian matrices," The American Mathematical Monthly, vol. 111, no. 2, pp. 157-159, 2004.
[20] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, vol. 28, no. 3, pp. 253-263, 2008.
[21] E. Candès and T. Tao, "Decoding via linear programming," IEEE Trans. Inform. Theory, vol. 51, no. 12, pp. 4203-4215, 2005.