Distributed Estimation in Large Wireless Sensor Networks via a Locally Optimum Approach


SUBMITTED TO IEEE TRANS. ON SIGNAL PROCESSING ON NOVEMBER 5, 2006. REVISED ON APRIL 2007.

Distributed Estimation in Large Wireless Sensor Networks via a Locally Optimum Approach

Stefano Marano, Vincenzo Matta, Peter Willett, Fellow, IEEE

Abstract — A Wireless Sensor Network (WSN) engaged in a decentralized estimation problem is considered. The nonrandom unknown parameter lies in some small neighborhood of a nominal value and, exploiting this knowledge, a Locally Optimum Estimator (LOE) is introduced. Under the LOE paradigm the sensors of the network process their observations by means of a suitable nonlinearity (the score function) before delivering data to the fusion center, which outputs the final estimate. Usually continuous-valued data cannot be reliably delivered from the sensors to the fusion center, and some form of data compression is necessary. Accordingly, we design the scalar quantizers to be used at the network's nodes in order to comply with the estimation problem at hand. This difficult multiterminal inference problem is shown to be asymptotically equivalent to the already solved problem of designing optimum quantizers for reconstruction (as opposed to inference) purposes.

Index Terms — Distributed estimation, scoring method, wireless sensor networks, data fusion.

I. INTRODUCTION

THE problem of estimating a nonrandom parameter θ by fusing independent and identically distributed observations collected at the n remote nodes of a noncooperative Wireless Sensor Network (WSN) is addressed; see [1]–[7] and the references therein for an introduction to sensor networks and their applications. We consider a parallel architecture in which all the sensors deliver quantized data to a common Fusion Center (FC) that provides the final estimate θ̂. The latter is obtained by fusing compressed versions of the remote observations, thus accounting for the limited capacities of the channels connecting the nodes and the FC. We focus on the tradeoff between the resulting rate of data transmission and the quality of inference, the latter being measured in terms of the achievable estimation Mean Square Error (MSE). We work in the limit where the unknown θ approaches a nominal value θ_0. The idea is that only rough information about the unknown parameter is available, namely that it lies in the proximity of the nominal value; there is insufficient evidence for selecting an a priori density for the parameter, as would be standard in a Bayesian framework, nor can the width of the parameter interval be defined exactly. Rather, we implement a sort of local Maximum Likelihood (ML) estimator, whose canonical structure and asymptotic properties are exploited to design the quantizers.

S. Marano and V. Matta are with the Department of Information and Electrical Engineering (DIIIE), University of Salerno, via Ponte don Melillo, 84084 Fisciano (SA), Italy. E-mails: {marano, vmatta}@unisa.it. P. Willett is with the ECE Department, University of Connecticut, Storrs, CT 06269 USA. E-mail: willett@engr.uconn.edu. Part of this work was presented at EUSIPCO 2006, Florence, Italy, September 4-8, 2006.

A. Related Work

The estimators considered in this paper are related to the well-known method of scoring, see e.g. [8] and references therein. Our approach is further related to the asymptotic analysis developed in a series of papers by Hàjek and Le Cam (see e.g. [9]–[12]), introduced to overcome some pathologies of asymptotic efficiency as originally formulated by Fisher [13]; see [14] and references therein.
Also relevant are tools and methods borrowed from the field of locally optimum detection, see e.g. [15, Sect. II, Chap. 3]. A Locally Optimum Estimator (LOE) has also been previously introduced in the work by Lee and Longley [16], using a discrete observation model in which the data result from a uniform quantization of some continuous source; there the quantizer structure is given and is not a matter of optimization. The locally optimum approach of [16] is defined in terms of θ tending to a nominal value, while the regime of increasingly large n is not considered, nor is any asymptotic optimality proven. A Bayesian version of a locally optimum estimator is addressed in [17]. Our focus is on quantizer optimization for decentralized (or multiterminal) inference, and this topic is extensively addressed in the literature. Several solutions have been proposed for the case that the parameter to be estimated is random: in [18] the authors use a cyclic version of the Lloyd-Max algorithm for optimizing the quantizers, see also [19]. Still for a random θ, in [20] quantizers minimizing the MSE are considered for certain classes of estimators, while an optimization of the (Bayesian) Fisher information provided by uniform quantizers is proposed in [21]. In [22], the optimal compandor for nonuniform scalar quantizers, in the high-resolution regime, is derived. As to the case of nonrandom θ, in [23] and [24] a rate-constrained optimization of quantizers is considered. An interesting point raised by those authors is that, in certain cases, minimizing the reproduction error with respect to the score function is the correct approach, and we will see that the same happens in our locally optimum setup. Information-theoretic results for multiterminal inference under communication constraints are collected in [25], where infinitely many observations per node are allowed. Finally, although focused on detection issues, it is worth mentioning the reviews on decentralized detection with multiple sensors in [26] and [27], and the distributed detection in sensor networks considered in [28]. Our approach is influenced by the work of Kassam [29, Chap. 4], [30], where optimum quantization for locally optimum detection is addressed.

B. Main Results and Paper Organization

If the nominal point θ_0 were close to correct, then the LOE could be seen as a one-step correction estimator based on the method of scoring [8], whose asymptotic performance is assessed by classic large-sample statistics [31]. Instead, in this work we assume that θ_0 is simply a nominal point and not a properly defined previous estimate of the unknown parameter. We accordingly derive a local ML estimator, the LOE, and investigate its properties in an asymptotic setting where the unknown θ approaches θ_0 at rate 1/√n, a framework borrowed from locally optimum detection theory [29]. Due to this 1/√n-scaling, most of the tools in the asymptotic development mirror those of the asymptotic statistics framework of Hàjek and Le Cam [14]. For the case of quantized observations, the asymptotic properties of the LOE are then exploited to design the quantizers that the network's nodes must employ in order to comply with the peculiarities of the inference task. This basically amounts to (i) applying to the raw data a nonlinearity given by the score function (evaluated at the nominal point θ = 0), and (ii) compressing the output of the nonlinearity by the standard Lloyd-Max quantizer, again working at the nominal point. Hence, we provide a way to tackle the difficult rate-distortion problem arising in multiterminal inference scenarios by standard and well-known single-terminal rate-distortion procedures such as the Lloyd-Max algorithm. This is perhaps the main contribution of the paper. Further capitalizing on the exact formulas found for the system performance, we carry out an error analysis aimed at identifying the θ-range in which the locally optimum, inference-oriented design is effective. We also compare the proposed scheme to a system that employs clairvoyant quantizers (they know θ) but is designed for reproducing the sensors' observations, rather than recovering the embedded parameter θ.

As to its functional form, the LOE additively combines suitably nonlinearly-transformed versions of the nodes' observations. Its structure is canonical in the sense that the underlying statistical distribution of the data only impacts the shape of the optimal nonlinearity, but leaves the structure of the data processing otherwise unchanged. This also embodies scalability properties that are particularly suited to WSNs: adding one more node does not modify the other quantizers, and the system design can be extended to arbitrarily large networks without an increase of the optimization complexity. The additive structure of the estimator suggests its use when the communication medium is an additive multiple-access channel, in which case a joint source-channel approach could be pursued, see [32] and [33]. Furthermore, the additive form may be desirable in particular communication/estimation schemes without a fusion center, such as those considered in [34]. All the results are derived assuming identically distributed data across nodes, a single observation per sensor, and a scalar θ. Note that iterative procedures, such as the standard Newton-Raphson algorithm, are also conceivable for computing the ML estimator; here, however, we assume that the nature of the network hinders multiple transmissions between sensors and FC, so that iterative methods are not allowed.

The remainder of this paper is organized as follows.
Section II introduces the proposed estimation strategy and discusses its main asymptotic property. In practice, only a finite number of bits can be transmitted from the sensors to the FC, leading to the rate optimization problem discussed in Sect. III. Sect. IV presents examples of applications and comparisons with different approaches. A summary of the main results is provided in Sect. V.

II. LOCALLY OPTIMUM ESTIMATION

A. Preliminaries

We wish to estimate a deterministic parameter θ from the data vector x = [x_1, x_2, ..., x_n], where the x_i, i = 1, 2, ..., n, are independent and identically distributed (henceforth iid) continuous random variables sharing the common probability density function (pdf) f_θ(x), which is parametrized by the unknown θ; we assume that the support of f_θ(x) does not depend upon θ. The physical interpretation is that observation x_i is collected at the i-th sensor of the n-node network, and these observations (or some compressed version thereof) are transmitted to the fusion center, which finally outputs the estimate θ̂. The unknown θ is in a neighborhood of θ_0 and, without loss of generality, we hereafter assume θ_0 = 0. We shall work in the limit where θ → 0, and the data distribution evaluated at θ = 0, say f_0(x), is referred to as the nominal pdf, as opposed to the actual probability density f_θ(x).

Consider the following log-likelihood ratio

L_n(x) = \log \frac{f_\theta(x)}{f_0(x)} = \sum_{i=1}^{n} \log \frac{f_\theta(x_i)}{f_0(x_i)},   (1)

formed by the actual density f_θ(x) against the nominal f_0(x). This can be Taylor-expanded around θ = 0, yielding

L_n(x) = \sum_{i=1}^{n} \left. \frac{\partial \log f_\theta(x_i)}{\partial\theta} \right|_{\theta=0} \theta + \frac{1}{2} \sum_{i=1}^{n} \left. \frac{\partial^2 \log f_\theta(x_i)}{\partial\theta^2} \right|_{\theta=0} \theta^2 + o(\theta^2).   (2)

As exhaustively discussed in [19], [29], [35] for the locally optimum detection problem, it makes sense to consider an increasingly large value of n along with a vanishingly small θ. For our estimation purposes, the practical significance of a large n is that improving on the trivial estimate θ̂ = 0 (which is independent of the sensors' observations) necessarily requires high-performing estimates, which can only be obtained with a relatively large sample size. Borrowing a standard approach from the theory of locally optimum detection [29], let us define γ = θ√n, so that when n increases without bound and γ is arbitrary but fixed, θ approaches 0 at a prescribed rate. (Note that as the number of sensors n increases, we assume that the iid property of the observations is retained, namely the network becomes larger but not denser.)

Introducing γ in eq. (2) and neglecting higher-order terms gives

L_n(x) \approx \frac{\gamma}{\sqrt{n}} \sum_{i=1}^{n} \left. \frac{\partial \log f_\theta(x_i)}{\partial\theta} \right|_{\theta=0} + \frac{\gamma^2}{2n} \sum_{i=1}^{n} \left. \frac{\partial^2 \log f_\theta(x_i)}{\partial\theta^2} \right|_{\theta=0}.   (3)

B. LOE and its asymptotic properties

The so-called score of the random variable x drawn from f_θ(x) is ∂ log f_θ(x)/∂θ. Computing it at θ = 0 yields

g(x) = \left. \frac{\partial \log f_\theta(x)}{\partial\theta} \right|_{\theta=0} = \frac{f'_0(x)}{f_0(x)},   (4)

where f'_0(x) denotes the derivative with respect to θ, evaluated at θ = θ_0 = 0. Capitalizing on eqs. (3) and (4), we define the LOE as follows (see [16], see also [8]):

\hat\theta_{LOE}(x) \stackrel{\rm def}{=} \frac{1}{n\, I(0)} \sum_{i=1}^{n} g(x_i).   (5)

The connection of this definition with expression (3) will become clear shortly. Let us define

I(0) = \int \left[ \left. \frac{\partial \log f_\theta(x)}{\partial\theta} \right|_{\theta=0} \right]^2 f_0(x)\, dx,

that is, the Fisher information per sample computed under the nominal density f_0(x). The main property of the LOE can now be stated:

\sqrt{n}\, \big( \hat\theta_{LOE}(x) - \theta \big) \;\xrightarrow{f_\theta}\; \mathcal{N}\!\left( 0, \frac{1}{I(0)} \right).   (6)

In the above, the notation →^{f_θ} N(a, b) means that, under f_θ(x), the left-hand side converges in distribution to a Gaussian with mean a and variance b, when n → ∞ and θ = γ/√n (γ fixed).

To elaborate, let n → ∞ and assume that f_0(x) is the true distribution of the data. Under these assumptions, the second term on the RHS of eq. (3) converges in probability to −γ² I(0)/2; this is due to the convergence of the arithmetic mean to the statistical expectation, and to the well-known equality E_θ[(∂ log f_θ(x)/∂θ)²] = −E_θ[∂² log f_θ(x)/∂θ²]. Substituting this into eq. (3) we get

L_n(x) \approx \sum_{i=1}^{n} \left. \frac{\partial \log f_\theta(x_i)}{\partial\theta} \right|_{\theta=0} \theta - \frac{\theta^2}{2}\, n\, I(0),

whose derivative with respect to θ is zero at θ = θ̂_LOE(x). This reveals that the LOE is nothing but an ML estimator in which the log-likelihood is approximated around the nominal value θ = 0 [8], [16]. It also reveals that the LOE is obtained as the solution of Σ_{i=1}^n ψ(x_i, θ) = 0, where ψ(x_i, θ) = g(x_i)/I(0) − θ and g(x) is defined in eq. (4). This implies that the LOE falls within the class of M-estimators introduced by Huber [36], with a specific form of ψ that endows the estimator with desirable properties. It is also worth noting that, asymptotically, E_θ[θ̂_LOE(x) − θ] → 0 and VAR_θ[θ̂_LOE(x)] ≈ [n I(0)]^{-1}, where E_θ and VAR_θ denote statistical expectation and variance under the distribution f_θ(x). Therefore, if the nominal Fisher information I(0) in eq. (6) is replaced by the actual I(θ) (computed under f_θ(x)), the above claim simply states that θ̂_LOE(x) is asymptotically efficient in the usual Cramér-Rao sense [37].

Furthermore, we would like to emphasize the canonical structure of the LOE. As seen from eq. (5), θ̂_LOE(x) additively combines a suitable transformation g(x_i) of the individual sensors' observations x_i. In general, the ML estimator, as well as many other commonly used estimators, is far from being in a form that decouples individual observations. One main feature of the LOE is that, regardless of the first-order statistical distribution of the sensors' observations, θ̂_LOE(x) can be computed at the fusion center from the data separately delivered by the individual nodes of the network. Remarkably, the underlying data distribution influences the shape of g(·), but not the structure of the estimator in eq. (5). Therefore, the role of g(·) is that of an optimal nonlinearity, namely the transformation of the data that must be used to build θ̂_LOE(x): g(x) is the nonlinearity that each sensor of the network should apply to its observation before sending it to the fusion center. A small numerical sketch of eq. (5) is given next.
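The following minimal sketch illustrates eq. (5). The Gaussian-mean model of Sect. IV-A is assumed purely for concreteness (f_θ = N(θ, 1), so that g(x) = x and I(0) = 1); the sample size and the value of the true θ are arbitrary illustrative choices, not values taken from the paper.

```python
import numpy as np

# Minimal sketch of the LOE in eq. (5), under the assumed Gaussian-mean model:
# f_theta = N(theta, 1), nominal point theta_0 = 0, score g(x) = x, I(0) = 1.
rng = np.random.default_rng(0)

def g(x):
    # score d/dtheta log f_theta(x) evaluated at theta = 0, for f_theta = N(theta, 1)
    return x

I0 = 1.0                      # nominal Fisher information per sample
n, theta_true = 10_000, 0.02  # large network, theta close to the nominal point
x = rng.normal(theta_true, 1.0, size=n)   # one observation per sensor

theta_loe = g(x).sum() / (n * I0)         # eq. (5): (1/(n I(0))) * sum_i g(x_i)
print(theta_loe)              # close to theta_true when n is large and theta is small
```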
Let us now provide a justification of eq. (6). We stress that the following development could be obtained directly from standard results in asymptotic statistics [14]; a brief sketch is included here for self-consistency. Let n → ∞. As seen earlier, under f_0(x) the second term on the RHS of eq. (3) converges in probability to −γ² I(0)/2. On the other hand, under f_0(x) the first term on the RHS is easily recognized to be Locally Asymptotically Normal (LAN) [9], with zero mean and variance γ² I(0). Thus, asymptotically in n, the following convergence in distribution holds:

L_n \;\xrightarrow{f_0}\; \mathcal{N}\!\left( -\frac{\gamma^2 I(0)}{2},\; \gamma^2 I(0) \right).   (7)

To get the statistical characterization of the log-likelihood under the true distribution f_θ(x), we resort to the key concept of contiguity introduced by Le Cam [9]. The formal definition is rather involved but, fortunately, for our purposes the following basic result suffices.

Fundamental theorem on the use of contiguity [35]. Given two families of distribution functions {w^{(n)}(x)} and {z^{(n)}(x)}, assume that the sequence of the associated log-likelihood ratios {log[w^{(n)}(x)/z^{(n)}(x)]} converges under z^{(n)}(x) to a Gaussian distribution with mean µ and variance σ². Then, provided µ = −σ²/2 (as is the case here), contiguity holds and, under w^{(n)}(x), the log-likelihood sequence converges to a Gaussian with mean µ + σ² and variance σ².

Exploiting this result and eq. (7), we conclude that

L_n \;\xrightarrow{f_\theta}\; \mathcal{N}\!\left( \frac{\gamma^2 I(0)}{2},\; \gamma^2 I(0) \right).   (8)

Now, combining the definition of the LOE with the likelihood expansion in eq. (3), and neglecting any term that can be safely assumed to vanish in probability for large n (to this aim we again assume that appropriate technical conditions are fulfilled [37]), we get

\sqrt{n}\, \big( \hat\theta_{LOE}(x) - \theta \big) \;=\; \frac{L_n(x)}{\gamma I(0)} \;-\; \frac{\gamma}{2 n I(0)} \sum_{i=1}^{n} \left. \frac{\partial^2 \log f_\theta(x_i)}{\partial\theta^2} \right|_{\theta=0} \;-\; \gamma.   (9)

On the RHS of eq. (9) it is recognized that (i) the first term converges in distribution, by eq. (8), to a Gaussian with mean γ/2 and variance 1/I(0), and (ii) the second term converges in probability to γ/2. The claim of eq. (6) now follows as a direct application of Slutsky's theorem [31].

III. QUANTIZER DESIGN IN WSNS

In many WSN scenarios, the transmission of the continuous-valued quantities g(x_i), as prescribed by eq. (5), is inhibited by the finite capacity of the communication links between the remote nodes and the FC. The nodes of the network should then quantize the g(x_i)'s before transmission; in some WSN applications the quantization may also be rather coarse, in the sense that very few bits can be used. In the following we assume that each sensor employs a scalar quantizer Q, and that these quantizers are identical across the nodes, for simplicity and with an appeal to symmetry. Let q_i = Q(x_i) be the discrete-valued datum to be delivered to the FC, and let p_θ(q) be the associated probability mass function, the discrete counterpart of the density f_θ(x): p_θ(q) = ∫_{R_q} f_θ(x) dx, with R_q being the partition region yielding q as output.

We now parallel the development of the previous section in this discrete setting. Let

c_q \;=\; \left. \frac{\partial \log p_\theta(q)}{\partial\theta} \right|_{\theta=0} \;=\; \frac{p'_0(q)}{p_0(q)} \;=\; \frac{\int_{R_q} f'_0(x)\, dx}{\int_{R_q} f_0(x)\, dx} \;=\; \frac{\int_{R_q} g(x) f_0(x)\, dx}{\int_{R_q} f_0(x)\, dx}   (10)

be the score of the quantized variable q, evaluated at θ = 0. We define the LOE built from the quantized sensors' outputs as

\hat\theta_{LOE,q} \;=\; \frac{1}{n\, I_q(0)} \sum_{i=1}^{n} c_{q_i},   (11)

where

I_q(0) \;=\; \sum_q p_0(q) \left[ \left. \frac{\partial \log p_\theta(q)}{\partial\theta} \right|_{\theta=0} \right]^2   (12)

is the Fisher information per sample in the quantized case. In the same asymptotic setting formalized in the previous section (i.e., γ = θ√n, with n → ∞ and γ held fixed), the following statement can be proved:

\sqrt{n}\, \big( \hat\theta_{LOE,q}(q) - \theta \big) \;\xrightarrow{f_\theta}\; \mathcal{N}\!\left( 0, \frac{1}{I_q(0)} \right),   (13)

whose derivation goes exactly as in the continuous case and is omitted.

The LOE in eq. (11) is the sample mean of the quantities ξ_i = c_{q_i}/I_q(0). Accordingly, its mean square error is

D_n(\theta) \;=\; \frac{v(\theta)}{n} + b^2(\theta),   (14)

where v(θ) = VAR_θ[ξ] and b(θ) = E_θ[ξ] − θ is the bias term. The limiting behaviors of v(θ) and b(θ) around the nominal point are easily derived from the asymptotic properties of the LOE: the above result implies that the asymptotic mean and variance of √n(θ̂_LOE,q − γ/√n) are 0 and 1/I_q(0), respectively (although this kind of convergence does not in general ensure the convergence of the individual moments, see e.g. [31], such convergences are usually met in practical cases [29]). Hence lim_{n→∞} v(γ/√n) = 1/I_q(0) and lim_{n→∞} n b²(γ/√n) = 0. These expressions can be rephrased as

\lim_{\theta\to 0} v(\theta) = \frac{1}{I_q(0)}, \qquad \lim_{\theta\to 0} \frac{b^2(\theta)}{\theta^2} = 0.   (15)

From eqs. (14) and (15) we conclude that if θ goes to zero, for a fixed n, then D_n(θ) → [n I_q(0)]^{-1}. Conversely, for a fixed θ, in the limit n → ∞ the bias term dominates and D_n(θ) → b²(θ). We elaborate on these separate limits in the section devoted to the applications.

Coming back to the asymptotic setting where γ = θ√n is fixed, we are now in the position to design the quantizers. The problem can conveniently be formalized as a classical rate-distortion problem: select the lowest allowable rate R of information transmission (in our case, the number of bits of the quantizer) compatible with a given distortion level D, measured here in terms of estimation MSE. A small numerical sketch of the quantized-score quantities c_q and I_q(0) is given below, followed by the main result.
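As an illustration of eqs. (10) and (12), the sketch below computes the quantized scores c_q and the quantized Fisher information I_q(0) for a hypothetical four-level partition, again under the assumed Gaussian-mean model where g(x) = x and f_0 = N(0, 1); the thresholds are arbitrary and only serve to make the formulas concrete.

```python
import numpy as np
from scipy.stats import norm

# Sketch of c_q (eq. (10)) and I_q(0) (eq. (12)) for a hypothetical 4-level scalar
# quantizer, under the assumed Gaussian-mean model: g(x) = x, f_0 = N(0, 1).
edges = np.array([-np.inf, -1.0, 0.0, 1.0, np.inf])   # arbitrary region boundaries

p0 = norm.cdf(edges[1:]) - norm.cdf(edges[:-1])        # p_0(q) over each region R_q
# For g(x) = x and f_0 = N(0, 1): int_a^b x f_0(x) dx = phi(a) - phi(b), hence
c = (norm.pdf(edges[:-1]) - norm.pdf(edges[1:])) / p0  # c_q of eq. (10)

Iq0 = np.sum(p0 * c**2)                                # I_q(0) of eq. (12)
print(c, Iq0)   # I_q(0) <= I(0) = 1, consistently with eq. (19) below
```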
Proposition. In the class of LOE estimators, the optimization problem

\min_{Q \,:\, \log_2 |Q| \le R} D_n(\theta),   (16)

where D_n(θ) is the estimation mean square error obtained using the quantizer Q, is asymptotically equivalent to

\min_{Q \,:\, \log_2 |Q| \le R} \epsilon\big(g(x), c_q\big),   (17)

where g(x) is the (continuous) optimal nonlinearity given in eq. (4), and \epsilon(g(x), c_q) = \int f_0(x) \big( g(x) - c_q \big)^2 dx.

A key point should be emphasized. As revealed by eq. (10), the score c_q is the MMSE estimate, given q, of the optimal nonlinearity g(x), computed with respect to the pdf f_0(x). In other words, c_q is a quantized version of the optimal (unquantized) nonlinearity. Thus, eq. (17) represents a classical optimization problem in the context of quantization for reproduction purposes: we have a continuous quantity g(x), and c_q is its scalar-quantized version. The goal is to minimize the reproduction mean square error E[(g(x) − c_q)²] between the original g(x) and its quantized counterpart c_q, where the expectation is taken with respect to f_0(x). Such an optimization problem can therefore be solved by means of a standard Lloyd-Max algorithm [38], which provides the best quantizer achieving the minimum reproduction error subject to a constraint on the number of bits. Basically, we have reduced the difficult and generally unsolved problem of optimal quantizer design for multiterminal inference to the standard problem of optimal quantizer design for single-terminal reconstruction; a minimal sketch of this design step follows.
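Before turning to the proof, here is a minimal sketch of how eq. (17) can be attacked in practice: a plain Lloyd-Max iteration run on samples of y = g(x) drawn under the nominal density, one convenient (sample-based) way among others to approximate the required expectations. The score g and the nominal sampler below are Gaussian-mean placeholders and should be replaced by those of the model at hand; the function name and the sample sizes are ours.

```python
import numpy as np

# Sketch of the design suggested by eq. (17): Lloyd-Max on y = g(x), x ~ f_0.
def lloyd_max(y, levels, iters=200):
    """Return thresholds and codepoints minimizing the empirical E[(y - c_q)^2]."""
    c = np.quantile(y, (np.arange(levels) + 0.5) / levels)      # initial codepoints
    for _ in range(iters):
        t = 0.5 * (c[:-1] + c[1:])                   # nearest-neighbor thresholds
        idx = np.digitize(y, t)                      # region index of each sample
        c = np.array([y[idx == k].mean() for k in range(levels)])  # centroid step
    return t, c

g = lambda x: x                        # placeholder score (Gaussian-mean case, Sect. IV-A)
rng = np.random.default_rng(1)
x0 = rng.normal(0.0, 1.0, size=200_000)              # samples from the assumed nominal f_0
thresholds, codepoints = lloyd_max(g(x0), levels=2**2)   # R = 2 bits
print(thresholds, codepoints)
```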

In this way, we are allowed to use the many extant methods and tools from the area of quantization.

Proof of the Proposition. We must prove the equivalence between (16) and (17). To this aim we exploit eq. (13), which implies, in the asymptotic regime,

E_\theta[\hat\theta_q] \approx \theta, \qquad VAR_\theta[\hat\theta_q] \approx \frac{1}{n\, I_q(0)}.   (18)

The implication is that, instead of minimizing D_n(θ), one can maximize I_q(0). Furthermore, as said earlier, c_q is both the score of the discrete-valued random variable q and the MMSE estimate of g(x) given q, with respect to f_0(x). This implies (all averages being computed under f_0(x)):

I(0) = E[g^2(x)] = E[(g(x) - c_q + c_q)^2] = E[(g(x) - c_q)^2] + E[c_q^2] + 2\, E[(g(x) - c_q)\, c_q].

The last addend is zero as a consequence of the orthogonality principle (g(x) − c_q) ⟂ c_q, which holds true because c_q is an MMSE estimate [38]. Hence,

I(0) = E[(g(x) - c_q)^2] + E[c_q^2] = \epsilon\big(g(x), c_q\big) + I_q(0).   (19)

We have thus shown that: minimizing D_n(θ) ⇔ maximizing I_q(0) ⇔ minimizing ε(g(x), c_q).

IV. APPLICATIONS

We now consider examples of application to a decentralized estimation problem in a WSN. Specifically, we are interested in the rate-distortion behavior (in our setting, number of bits versus estimation MSE) of a WSN engaged in the task of estimating θ, knowing that θ lies in some small neighborhood of 0. Recall that the i-th sensor observes x_i and computes g(x_i); the latter is quantized by the Lloyd-Max algorithm [38], as described earlier. Finally, the fusion center receives the quantized sensors' outputs and provides the final estimate (11), according to the LOE approach.

A. Gaussian distribution

Consider first a Gaussian problem, in which the mean of Gaussian observations is to be estimated, the variance being known. It is straightforward to show that the optimal nonlinearity in eq. (4) is g(x) ∝ x: the optimal estimator fuses the original (untransformed) observations x_i and, as a consequence, the attempt here is to recover at the FC the original observations {x_i} with the best possible fidelity. The relevant implication is that the LOE estimation-oriented quantization scheme reduces to the standard reconstruction-oriented one, and our approach yields no benefit in this case. (Actually, this comes as little surprise, as there are many results on Gaussian problems giving similar insights.)

Fig. 1. Probability density functions of the sensors' observations and the associated optimal nonlinearities g(x), for a mixture of Gaussians. The left panels refer to a mixture (α = 0.7) of Gaussians with different means and equal variances; in the right panels the Gaussians share the same mean θ but have different variances (see the first and second cases in Sect. IV-B).

B. Mixture of Gaussians

As a non-Gaussian example, let us consider a simple mixture of two Gaussians:

f_\theta(x) = \alpha\, G(x; \mu_1, \sigma_1^2) + (1 - \alpha)\, G(x; \mu_2, \sigma_2^2),   (20)

where α is the mixture coefficient and G(x; µ, σ²) stands for the Gaussian pdf with mean µ and variance σ², computed at x. The dependence upon θ is embedded in the means and variances. We consider two possibilities.

First case. Different means: µ_1 = θ, µ_2 = µ (a known constant). The score ∂ log f_θ(x)/∂θ is easily evaluated as

\frac{\alpha\, G(x; \theta, \sigma_1^2)\, (x - \theta)/\sigma_1^2}{\alpha\, G(x; \theta, \sigma_1^2) + (1 - \alpha)\, G(x; \mu_2, \sigma_2^2)},   (21)

so that, according to eq. (4), the nonlinearity is
g(x) = x\, \frac{\alpha\, G(x; 0, \sigma_1^2)/\sigma_1^2}{\alpha\, G(x; 0, \sigma_1^2) + (1 - \alpha)\, G(x; \mu_2, \sigma_2^2)}.   (22)

Second case. Same mean: µ_1 = µ_2 = θ. The score ∂ log f_θ(x)/∂θ is

\frac{(x - \theta) \big[ \alpha\, G(x; \theta, \sigma_1^2)/\sigma_1^2 + (1 - \alpha)\, G(x; \theta, \sigma_2^2)/\sigma_2^2 \big]}{\alpha\, G(x; \theta, \sigma_1^2) + (1 - \alpha)\, G(x; \theta, \sigma_2^2)},   (23)

and the nonlinearity becomes

g(x) = x\, \frac{\alpha\, G(x; 0, \sigma_1^2)/\sigma_1^2 + (1 - \alpha)\, G(x; 0, \sigma_2^2)/\sigma_2^2}{\alpha\, G(x; 0, \sigma_1^2) + (1 - \alpha)\, G(x; 0, \sigma_2^2)}.   (24)

Figure 1 shows the pdfs f_θ(x) for the two cases above, along with the shapes of the corresponding optimal nonlinearities g(x) of eq. (4); a small numerical sketch of these nonlinearities is given below. Note that g(x) may not be a one-to-one function. Accordingly, the LOE system may employ irregular quantizers with unconnected quantization regions when mapped back to the original observation x, thus reflecting the specific peculiarities of the estimation problem. A similar result is obtained when designing locally optimum quantizers in the context of detection systems, see [29].
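The following sketch evaluates the nonlinearities (22) and (24) for the two mixture cases. The numerical parameter values (µ_2, σ_1, σ_2) are illustrative placeholders only; the exact values used for Fig. 1 are not asserted here.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the optimal nonlinearities of eqs. (22) and (24), Sect. IV-B.
# Parameter values below (mu2, s1, s2) are assumed placeholders, not the paper's.
alpha = 0.7

def g_case1(x, mu2=2.0, s1=1.0, s2=1.0):
    # different means (mu1 = theta, mu2 fixed): eq. (22), evaluated at theta = 0
    num = alpha * norm.pdf(x, 0.0, s1) / s1**2
    den = alpha * norm.pdf(x, 0.0, s1) + (1 - alpha) * norm.pdf(x, mu2, s2)
    return x * num / den

def g_case2(x, s1=1.0, s2=3.0):
    # common mean (mu1 = mu2 = theta), different variances: eq. (24) at theta = 0
    g1, g2 = norm.pdf(x, 0.0, s1), norm.pdf(x, 0.0, s2)
    num = alpha * g1 / s1**2 + (1 - alpha) * g2 / s2**2
    return x * num / (alpha * g1 + (1 - alpha) * g2)

x = np.linspace(-6.0, 6.0, 401)
print(g_case1(x).max(), g_case2(x).max())   # both curves are visibly non-monotone
```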

Fig. 2. MSE of the LOE (simulation versus theory) compared to its asymptotic value [n I_q(0)]^{-1}, for the mixture of Gaussians, as a function of n = γ²/θ² and for several rates R. Here γ is held fixed, so that as n increases θ decreases as n^{-1/2}. The left panel refers to the first (different-means) case of Sect. IV-B, the right panel to the second (common-mean, different-variances) case.

Figure 2 shows the convergence of the LOE's MSE to its limit value [n I_q(0)]^{-1}, assuming that θ scales as 1/√n, as prescribed by the asymptotic theory. The MSE is computed via standard Monte Carlo simulations; the values of I_q(0), on the other hand, are computed numerically and do not require Monte Carlo runs. Apart from statistical fluctuations due to the limited number of runs of the numerical experiments (here 10⁴), the conclusion is that the MSE of the designed system converges to the value predicted by the theory.

C. Comparison with reproduction-oriented schemes

The LOE theory provides an estimation-oriented quantization scheme. Conversely, the classical reproduction-oriented quantization design aims at recovering the original observations x_i as accurately as possible at the fusion center: the quantization stage completely ignores the fact that the final aim of the system is to estimate a parameter embedded in the observations, rather than to recover the observations themselves. To compare the performance of the LOE with that of classical (i.e., reproduction-oriented) schemes, we consider the case in which the Lloyd-Max procedure is designed to minimize the reproduction MSE, with the design algorithm run using the true distribution f_θ(x). (To be more explicit, reproduction-oriented quantization still uses the Lloyd-Max algorithm, but it is run over x_i rather than over g(x_i); it is precisely the shape of this nonlinearity that accounts for the peculiarities of the estimation problem.) As f_θ(x) is actually unknown, this competing quantizer is clairvoyant and unrealizable. We further assume that the FC is able to compute the ML estimate from these reproduction-oriented quantized data. Given the large values of n in the following examples, the asymptotic properties of the ML estimator are expected to be attained, so that in some sense we are comparing our LOE with nearly the best that a reproduction-oriented scheme can do, with the further advantage for the latter of being clairvoyant.

Fig. 3. MSE of the LOE compared to an ML estimator fed by clairvoyant quantizers optimized for reproduction purposes around the true value of θ, as a function of the rate R. Here n = 10³ and the number of Monte Carlo runs is 10⁴. The left panel refers to the first (different-means) mixture case of Sect. IV-B, the right panel to the second (common-mean, different-variances) case.

We refer again to the mixture of Gaussians detailed in Sect. IV-B. In Fig. 3, a fixed θ close to the nominal point and n = 10³ are given, and the rate (number of bits) versus distortion (estimation MSE) characteristic is shown, for both the LOE and the ML with reproduction-oriented clairvoyant quantizers. The left panel addresses the first case of Sect. IV-B (µ_1 = θ, µ_2 fixed). The worst rate-distortion law pertains to the clairvoyant quantizer.
We see that the LOE allows for remarkable gains, especially at low bit rates. An intuitive explanation is as follows. The known and positive mean µ_2 makes the right tail of the density f_θ(x) heavy, and the reproduction-oriented system takes this into account. The estimation-oriented optimization, on the other hand, focuses on the shape of f_θ(x) in the proximity of the origin (it is known that θ ≈ 0) while paying less attention to the right tail, as evidenced by the shape of g(x) (see the left panels of Fig. 1). The quantizers' partition regions reflect these differences, which are stronger in the limit of hard quantization.

We also want to emphasize the behavior at large rates, where the curves tend to approach a constant value. In fact, once the ε term in eq. (19) becomes small with respect to the Fisher information I(0), little is gained by further reducing ε, as per eq. (17), via an increase of the bit rate. This does not mean that the optimization procedure is inefficient at large rates: direct inspection of the ε term does reveal a significant reduction of its value in this regime as well. Simply, these variations are negligible with respect to I(0), which represents the ultimate bound on the inference performance.

Consider now the right panel of Fig. 3, where µ_1 = µ_2 = θ (second case of Sect. IV-B). With one-bit quantization the different systems perform similarly, an obvious consequence of the inherent symmetry of the distribution f_0(x). At larger bit rates, the reproduction-based approach is outperformed by the inference-oriented system.

Fig. 4. MSE of the LOE strategy as a function of θ, for n = 10², 10³, 10⁴, with the almost horizontal lines referring to the MSE of the ML strategy. The upper-left and upper-right panels refer to the first and second cases of Sect. IV-B, respectively, for the quantized case and the reproduction-oriented ML; the relevant parameters are as in Fig. 3. The lower-left and lower-right panels refer to the first and second cases, respectively, for the unquantized case, again with the parameters of Fig. 3.

Note also that no distortion gain is achieved in the ML case by increasing the bit rate from 1 to 2. While such effects are in practice remedied by some sort of time-sharing, which makes the rate-distortion curve convex, this is clear evidence of the waste of resources that may be suffered by reproduction-oriented approaches.

D. Error analysis

In the previous section we saw that the LOE may well outperform reproduction-oriented strategies, provided that θ is sufficiently close to 0. What is the range of values of θ over which our locally optimum approach works acceptably? And how does the LOE behave when θ and/or n are held fixed, compared to strategies other than those considered in Sect. IV-C? While a case-by-case study is required for complete answers, some general insight can be gained by examining eqs. (14) and (15), and recalling that the bias term dominates in the limit n → ∞, for fixed θ, yielding D_n(θ) → b²(θ). To elaborate on these aspects, let us define the class C of estimation strategies with MSE satisfying

D_n(\theta) = \frac{\alpha(\theta)}{n}, \qquad \text{where } \alpha(0) \ge \frac{1}{I_q(0)}.   (25)

The latter inequality ensures that estimators in the class C cannot asymptotically outperform the LOE at the nominal point, as it is reasonable to assume (the LOE being optimized at that point). Note the absence of a bias term in eq. (25) and the optimal scaling law D_n ∝ 1/n, compared to eq. (14). The relevant fact here is that, invoking continuity arguments, it is not difficult to see that for any fixed n there always exists a minimal range of values of θ around the nominal point such that the LOE strategy outperforms any strategy belonging to C.

Coming back to the examples studied in Sect. IV-B and armed with the above analysis, we investigate the LOE's MSE as θ moves away from its nominal value. In Fig. 4 we use eq. (14) to depict the MSE of the LOE as a function of θ, for different values of n and a fixed bit rate; a sketch of such a computation is given below. The figure also shows the error of the reproduction-oriented ML previously considered. To avoid cumbersome Monte Carlo simulations for the many values of n and θ, we adopt the inverse of the Fisher information pertaining to the reproduction-oriented quantized data as an approximation of the ML performance, having verified on a coarse set of points that this is a fair approximation. The upper-left and upper-right panels of Fig. 4 refer to the first and second cases of Sect. IV-B, respectively. The asymmetry of the curves on the left reflects the asymmetry of the underlying distributions. As n increases, the useful range shrinks, and the MSE is governed more and more by the bias. The same happens for the symmetric case shown in the upper-right panel.
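The sketch below shows how curves of the kind shown in Fig. 4 can be obtained from eq. (14) without Monte Carlo runs. The Gaussian-mean model is used here as an assumed stand-in for the mixtures of Fig. 4, and the quantizer thresholds are arbitrary; the codepoints are the nominal scores c_q of eq. (10).

```python
import numpy as np
from scipy.stats import norm

# Sketch of the exact MSE decomposition of eq. (14), D_n(theta) = v(theta)/n + b(theta)^2,
# for a quantized LOE under the assumed Gaussian-mean model f_theta = N(theta, 1).
edges = np.array([-np.inf, -1.0, 0.0, 1.0, np.inf])     # arbitrary nominal partition
p0 = norm.cdf(edges[1:]) - norm.cdf(edges[:-1])
c = (norm.pdf(edges[:-1]) - norm.pdf(edges[1:])) / p0   # nominal scores c_q, eq. (10)
Iq0 = np.sum(p0 * c**2)                                 # I_q(0), eq. (12)
xi = c / Iq0                                            # xi = c_q / I_q(0)

def mse(theta, n):
    p = norm.cdf(edges[1:] - theta) - norm.cdf(edges[:-1] - theta)  # p_theta(q)
    m = np.sum(p * xi)                       # E_theta[xi]
    v = np.sum(p * xi**2) - m**2             # v(theta) = VAR_theta[xi]
    return v / n + (m - theta)**2            # eq. (14)

for th in (0.0, 0.05, 0.2):
    print(th, mse(th, n=1000))   # the bias term takes over as theta moves away from 0
```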
In order to show how the benefits of the LOE are strictly related to the optimality of the quantization method proposed in this paper, let us finally analyze the case of unquantized observations. In the lower panels of Fig. 4, the MSE of the unquantized LOE is depicted, along with the corresponding performance (Fisher proxy) of the ML estimator. We see that at the nominal point the LOE attains the same MSE as the ML, as expected in the absence of quantization. Moving away from θ = 0, the continuity arguments following eq. (25) fail (at the nominal point the LOE no longer holds an advantage), and the behavior becomes less easy to predict. For instance, in the lower-left panel of Fig. 4 there occasionally exist values of n and a (right) neighborhood of the nominal point where the LOE outperforms the ML, a fact that cannot be excluded a priori, the LOE being biased even asymptotically. More or less obviously, these results and behaviors should not be taken as the general rule, as witnessed by the lower-right panel.

V. SUMMARY

In this paper we have considered the estimation of an unknown quantity θ by a network of n independent sensors connected in parallel to a fusion center. The communication is band-limited, and hence a quantized version of each sensor's observation is sent. How should these quantizers be designed? One idea is to quantize for maximum fidelity of the reconstructed observation from each sensor. This is appealing and lends access to a significant literature on optimal quantization but, in estimation problems more interesting than, say, that of an unknown mean in Gaussian noise, the strategy is suboptimal. The essence of this paper is that we quantize the score of the observations computed assuming θ = 0, and we call the resulting scheme locally optimum estimation (LOE). We take a hint from locally optimum detection, for which quantizing with the best reconstruction fidelity the observation transformed through the locally optimum nonlinearity (obtained from a Taylor approximation of the log-likelihood ratio statistic taken at signal strength θ = 0) is asymptotically optimal. In the Gaussian case the LOE provides nothing new. But in more interesting cases we find that the quantizers designed

for the inference task may be drastically different from those one would obtain by attempting simply to reconstruct the sensors' observations. These quantizers can be highly irregular (e.g., with multiply connected quantization regions), reflecting the specific way in which the unknown parameter θ is embedded in the observation. As a consequence, the performance improvement of LOE-based inference over a reproduction-oriented quantization may be very remarkable. We also investigate the range of θ in which the LOE approach is effective. As one may expect, a case-by-case study is required; however, some general results can be stated: if one considers the class of estimators whose MSE decreases with the optimal scaling law 1/n as the number n of iid observations increases (but that are not better than the LOE at the nominal point), there always exists an interval centered on θ = 0 where the LOE performs better.

ACKNOWLEDGMENT

The authors would like to thank the anonymous Reviewers for useful interpretation hints.

REFERENCES

[1] C. Chong and S. Kumar, "Sensor networks: Evolution, opportunities, and challenges," Proc. IEEE, vol. 91, no. 8, Aug. 2003.
[2] H. Gharavi and S. Kumar, "Scanning the issue: Special issue on sensor networks and applications," Proc. IEEE, vol. 91, no. 8, Aug. 2003.
[3] A. Krasnopeev, J. Xiao, and Z. Luo, "Minimum energy decentralized estimation in a wireless sensor network with correlated sensor noises," EURASIP Journal on Wireless Communications and Networking, no. 4, 2005.
[4] R. Niu and P. K. Varshney, "Distributed detection and fusion in a large wireless sensor network of random size," EURASIP Journal on Wireless Communications and Networking, no. 4, 2005.
[5] S. S. Pradhan, J. Kusuma, and K. Ramchandran, "Distributed compression in a dense microsensor network," IEEE Signal Processing Mag., vol. 19, pp. 51-60, Mar. 2002.
[6] Z. Yang, M. Dong, L. Tong, and B. M. Sadler, "MAC protocols for optimal information retrieval pattern in sensor networks with mobile access," EURASIP Journal on Wireless Communications and Networking, no. 4, 2005.
[7] F. Zhao, J. Liu, J. Liu, L. Guibas, and J. Reich, "Collaborative signal and information processing: An information-directed approach," Proc. IEEE, vol. 91, no. 8, Aug. 2003.
[8] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Englewood Cliffs, NJ: PTR Prentice Hall, 1993.
[9] L. M. Le Cam, Théorie asymptotique de la décision statistique. Montréal: Les presses de l'université de Montréal, 1969.
[10] J. Hàjek, "Local asymptotic minimax and admissibility in estimation," in Proc. of the 6th Berkeley Symposium on Mathematical Statistics and Probability, vol. 1. Berkeley: University of California Press, 1972.
[11] L. Le Cam, "Limits of experiments," in Proc. of the 6th Berkeley Symposium on Mathematical Statistics and Probability, vol. 1. Berkeley: University of California Press, 1972.
[12] L. Le Cam, "On a theorem of J. Hàjek," in Contributions to Statistics - Hàjek Memorial Volume, J. Jurečková, Ed. Prague: Academia, 1979.
[13] R. A. Fisher, "On the mathematical foundations of theoretical statistics," Philosophical Transactions of the Royal Society of London, Ser. A, vol. 222, 1922.
[14] L. Le Cam and G. Yang, Asymptotics in Statistics: Some Basic Concepts. Springer, 2000.
[15] B. Levin, Theoretical Principles of Statistical Radio Engineering. Moscow: MIR, 1976.
[16] C. C. Lee and L. A. Longley, "Nonparametric estimation algorithms based on input quantization," IEEE Trans. Inform.
Theory, no. 5, Sept.
[17] A. M. Maras, "Threshold parameter estimation in nonadditive non-Gaussian noise," IEEE Trans. Signal Processing, vol. 45, no. 7, July 1997.
[18] W. M. Lam and A. R. Reibman, "Design of quantizers for decentralized estimation systems," IEEE Trans. Commun., Nov. 1993.
[19] M. Longo, T. Lookabaugh, and R. Gray, "Quantization for decentralized hypothesis testing under communication constraints," IEEE Trans. Inform. Theory, vol. 36, Mar. 1990.
[20] J. A. Gubner, "Distributed estimation and quantization," IEEE Trans. Inform. Theory, vol. 39, no. 4, July 1993.
[21] S. Marano, V. Matta, and P. Willett, "Quantizer precision for distributed estimation in a large sensor network," IEEE Trans. Signal Processing, vol. 54, no. 10, Oct. 2006.
[22] S. Marano, V. Matta, and P. Willett, "Asymptotic design of quantizers for decentralized MMSE estimation," IEEE Trans. Signal Processing, in print.
[23] P. Venkitasubramaniam, L. Tong, and A. Swami, "Score-function quantization for distributed estimation," in Conference on Information Sciences and Systems (CISS '06), Princeton, NJ, USA, Mar. 2006.
[24] P. Venkitasubramaniam, L. Tong, and A. Swami, "Minimax quantization for distributed estimation," in 2006 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '06), Toulouse, France, May 2006.
[25] T. S. Han and S. Amari, "Statistical inference under multiterminal data compression," IEEE Trans. Inform. Theory, vol. IT-44, no. 6, Oct. 1998.
[26] R. Viswanathan and P. K. Varshney, "Distributed detection with multiple sensors: Part I - Fundamentals," Proc. IEEE, vol. 85, no. 1, Jan. 1997.
[27] R. S. Blum, S. A. Kassam, and H. V. Poor, "Distributed detection with multiple sensors: Part II - Advanced topics," Proc. IEEE, vol. 85, no. 1, Jan. 1997.
[28] J.-F. Chamberland and V. V. Veeravalli, "Decentralized detection in sensor networks," IEEE Trans. Signal Processing, vol. 51, no. 2, Feb. 2003.
[29] S. A. Kassam, Signal Detection in Non-Gaussian Noise. Springer-Verlag, 1987.
[30] S. A. Kassam, "Optimum quantization for signal detection," IEEE Trans. Commun., vol. 25, May 1977.
[31] J. Shao, Mathematical Statistics, 2nd ed. Springer, 2003.
[32] M. Gastpar and M. Vetterli, "On the capacity of large Gaussian relay networks," IEEE Trans. Inform. Theory, vol. 51, no. 3, Mar. 2005.
[33] G. Mergen and L. Tong, "Type based estimation over multiaccess channels," IEEE Trans. Signal Processing, vol. 54, no. 2, Feb. 2006.
[34] G. Scutari, S. Barbarossa, and L. Pescosolido, "Optimal decentralized estimation through self-synchronizing networks in the presence of propagation delays," in Signal Processing Advances in Wireless Communications (SPAWC '06), Cannes, France, July 2006.
[35] V. Genon-Catalot and D. Picard, Eléments de statistique asymptotique. New York: Springer-Verlag, 1993.
[36] P. J. Huber, "A robust version of the probability ratio test," Ann. Math. Statist., vol. 36, Dec. 1965.
[37] H. Cramér, Mathematical Methods of Statistics. Princeton, NJ: Princeton University Press, 1948.
[38] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Norwell, MA: Kluwer Academic Publishers, 1992.


More information

Optimal Sensor Rules and Unified Fusion Rules for Multisensor Multi-hypothesis Network Decision Systems with Fading Channels

Optimal Sensor Rules and Unified Fusion Rules for Multisensor Multi-hypothesis Network Decision Systems with Fading Channels Optimal Sensor Rules and Unified Fusion Rules for Multisensor Multi-hypothesis Network Decision Systems with Fading Channels Qing an Ren Yunmin Zhu Dept. of Mathematics Sichuan University Sichuan, China

More information

SIGNAL STRENGTH LOCALIZATION BOUNDS IN AD HOC & SENSOR NETWORKS WHEN TRANSMIT POWERS ARE RANDOM. Neal Patwari and Alfred O.

SIGNAL STRENGTH LOCALIZATION BOUNDS IN AD HOC & SENSOR NETWORKS WHEN TRANSMIT POWERS ARE RANDOM. Neal Patwari and Alfred O. SIGNAL STRENGTH LOCALIZATION BOUNDS IN AD HOC & SENSOR NETWORKS WHEN TRANSMIT POWERS ARE RANDOM Neal Patwari and Alfred O. Hero III Department of Electrical Engineering & Computer Science University of

More information

Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels

Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels Energy Efficient Estimation of Gaussian Sources Over Inhomogeneous Gaussian MAC Channels Shuangqing Wei, Ragopal Kannan, Sitharama Iyengar and Nageswara S. Rao Abstract In this paper, we first provide

More information

Some Aspects of DOA Estimation Using a Network of Blind Sensors

Some Aspects of DOA Estimation Using a Network of Blind Sensors Some Aspects of DOA Estimation Using a Network of Blind Sensors M. Guerriero a S. Marano b, V. Matta b P. Willett a a ECE Department, U-157, University of Connecticut, Storrs, CT 0669 USA. b DIIIE, University

More information

Decision Fusion With Unknown Sensor Detection Probability

Decision Fusion With Unknown Sensor Detection Probability 208 IEEE SIGNAL PROCESSING LETTERS, VOL. 21, NO. 2, FEBRUARY 2014 Decision Fusion With Unknown Sensor Detection Probability D. Ciuonzo, Student Member, IEEE, P.SalvoRossi, Senior Member, IEEE Abstract

More information

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad

More information

ADDITIVE noise is most often represented by a fixed

ADDITIVE noise is most often represented by a fixed IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 44, NO. 3, MAY 1998 947 Maximin Performance of Binary-Input Channels with Uncertain Noise Distributions Andrew L. McKellips, Student Member, IEEE, Sergio Verdú,

More information

Communication constraints and latency in Networked Control Systems

Communication constraints and latency in Networked Control Systems Communication constraints and latency in Networked Control Systems João P. Hespanha Center for Control Engineering and Computation University of California Santa Barbara In collaboration with Antonio Ortega

More information

Fast Near-Optimal Energy Allocation for Multimedia Loading on Multicarrier Systems

Fast Near-Optimal Energy Allocation for Multimedia Loading on Multicarrier Systems Fast Near-Optimal Energy Allocation for Multimedia Loading on Multicarrier Systems Michael A. Enright and C.-C. Jay Kuo Department of Electrical Engineering and Signal and Image Processing Institute University

More information

EIE6207: Estimation Theory

EIE6207: Estimation Theory EIE6207: Estimation Theory Man-Wai MAK Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University enmwmak@polyu.edu.hk http://www.eie.polyu.edu.hk/ mwmak References: Steven M.

More information

ON SCALABLE CODING OF HIDDEN MARKOV SOURCES. Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose

ON SCALABLE CODING OF HIDDEN MARKOV SOURCES. Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose ON SCALABLE CODING OF HIDDEN MARKOV SOURCES Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose Department of Electrical and Computer Engineering University of California, Santa Barbara, CA, 93106

More information

6.1 Variational representation of f-divergences

6.1 Variational representation of f-divergences ECE598: Information-theoretic methods in high-dimensional statistics Spring 2016 Lecture 6: Variational representation, HCR and CR lower bounds Lecturer: Yihong Wu Scribe: Georgios Rovatsos, Feb 11, 2016

More information

2784 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 7, JULY 2006

2784 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 7, JULY 2006 2784 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 7, JULY 2006 Bandwidth-Constrained Distributed Estimation for Wireless Sensor Networks Part II: Unknown Probability Density Function Alejandro

More information

Estimating Gaussian Mixture Densities with EM A Tutorial

Estimating Gaussian Mixture Densities with EM A Tutorial Estimating Gaussian Mixture Densities with EM A Tutorial Carlo Tomasi Due University Expectation Maximization (EM) [4, 3, 6] is a numerical algorithm for the maximization of functions of several variables

More information

Joint Source-Channel Coding for the Multiple-Access Relay Channel

Joint Source-Channel Coding for the Multiple-Access Relay Channel Joint Source-Channel Coding for the Multiple-Access Relay Channel Yonathan Murin, Ron Dabora Department of Electrical and Computer Engineering Ben-Gurion University, Israel Email: moriny@bgu.ac.il, ron@ee.bgu.ac.il

More information

Chapter 9. Non-Parametric Density Function Estimation

Chapter 9. Non-Parametric Density Function Estimation 9-1 Density Estimation Version 1.1 Chapter 9 Non-Parametric Density Function Estimation 9.1. Introduction We have discussed several estimation techniques: method of moments, maximum likelihood, and least

More information

On Optimal Coding of Hidden Markov Sources

On Optimal Coding of Hidden Markov Sources 2014 Data Compression Conference On Optimal Coding of Hidden Markov Sources Mehdi Salehifar, Emrah Akyol, Kumar Viswanatha, and Kenneth Rose Department of Electrical and Computer Engineering University

More information

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student

More information

A Sufficient Condition for Optimality of Digital versus Analog Relaying in a Sensor Network

A Sufficient Condition for Optimality of Digital versus Analog Relaying in a Sensor Network A Sufficient Condition for Optimality of Digital versus Analog Relaying in a Sensor Network Chandrashekhar Thejaswi PS Douglas Cochran and Junshan Zhang Department of Electrical Engineering Arizona State

More information

MMSE Dimension. snr. 1 We use the following asymptotic notation: f(x) = O (g(x)) if and only

MMSE Dimension. snr. 1 We use the following asymptotic notation: f(x) = O (g(x)) if and only MMSE Dimension Yihong Wu Department of Electrical Engineering Princeton University Princeton, NJ 08544, USA Email: yihongwu@princeton.edu Sergio Verdú Department of Electrical Engineering Princeton University

More information

ANALYSIS OF A PARTIAL DECORRELATOR IN A MULTI-CELL DS/CDMA SYSTEM

ANALYSIS OF A PARTIAL DECORRELATOR IN A MULTI-CELL DS/CDMA SYSTEM ANAYSIS OF A PARTIA DECORREATOR IN A MUTI-CE DS/CDMA SYSTEM Mohammad Saquib ECE Department, SU Baton Rouge, A 70803-590 e-mail: saquib@winlab.rutgers.edu Roy Yates WINAB, Rutgers University Piscataway

More information

On Information Maximization and Blind Signal Deconvolution

On Information Maximization and Blind Signal Deconvolution On Information Maximization and Blind Signal Deconvolution A Röbel Technical University of Berlin, Institute of Communication Sciences email: roebel@kgwtu-berlinde Abstract: In the following paper we investigate

More information

بسم الله الرحمن الرحيم

بسم الله الرحمن الرحيم بسم الله الرحمن الرحيم Reliability Improvement of Distributed Detection in Clustered Wireless Sensor Networks 1 RELIABILITY IMPROVEMENT OF DISTRIBUTED DETECTION IN CLUSTERED WIRELESS SENSOR NETWORKS PH.D.

More information

Capacity of a Two-way Function Multicast Channel

Capacity of a Two-way Function Multicast Channel Capacity of a Two-way Function Multicast Channel 1 Seiyun Shin, Student Member, IEEE and Changho Suh, Member, IEEE Abstract We explore the role of interaction for the problem of reliable computation over

More information

Information Theory Meets Game Theory on The Interference Channel

Information Theory Meets Game Theory on The Interference Channel Information Theory Meets Game Theory on The Interference Channel Randall A. Berry Dept. of EECS Northwestern University e-mail: rberry@eecs.northwestern.edu David N. C. Tse Wireless Foundations University

More information

Precoding for Decentralized Detection of Unknown Deterministic Signals

Precoding for Decentralized Detection of Unknown Deterministic Signals Precoding for Decentralized Detection of Unknown Deterministic Signals JUN FANG, Member, IEEE XIAOYING LI University of Electronic Science and Technology of China HONGBIN LI, Senior Member, IEEE Stevens

More information

Cooperative Communication with Feedback via Stochastic Approximation

Cooperative Communication with Feedback via Stochastic Approximation Cooperative Communication with Feedback via Stochastic Approximation Utsaw Kumar J Nicholas Laneman and Vijay Gupta Department of Electrical Engineering University of Notre Dame Email: {ukumar jnl vgupta}@ndedu

More information

Broadcast Detection Structures with Applications to Sensor Networks

Broadcast Detection Structures with Applications to Sensor Networks Broadcast Detection Structures with Applications to Sensor Networks Michael A. Lexa * and Don H. Johnson Department of Electrical and Computer Engineering Rice University, Houston, TX 77251-1892 amlexa@rice.edu,

More information

Advanced Signal Processing Introduction to Estimation Theory

Advanced Signal Processing Introduction to Estimation Theory Advanced Signal Processing Introduction to Estimation Theory Danilo Mandic, room 813, ext: 46271 Department of Electrical and Electronic Engineering Imperial College London, UK d.mandic@imperial.ac.uk,

More information

Co-Prime Arrays and Difference Set Analysis

Co-Prime Arrays and Difference Set Analysis 7 5th European Signal Processing Conference (EUSIPCO Co-Prime Arrays and Difference Set Analysis Usham V. Dias and Seshan Srirangarajan Department of Electrical Engineering Bharti School of Telecommunication

More information

Estimators as Random Variables

Estimators as Random Variables Estimation Theory Overview Properties Bias, Variance, and Mean Square Error Cramér-Rao lower bound Maimum likelihood Consistency Confidence intervals Properties of the mean estimator Introduction Up until

More information

A Systematic Description of Source Significance Information

A Systematic Description of Source Significance Information A Systematic Description of Source Significance Information Norbert Goertz Institute for Digital Communications School of Engineering and Electronics The University of Edinburgh Mayfield Rd., Edinburgh

More information

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008

Gaussian processes. Chuong B. Do (updated by Honglak Lee) November 22, 2008 Gaussian processes Chuong B Do (updated by Honglak Lee) November 22, 2008 Many of the classical machine learning algorithms that we talked about during the first half of this course fit the following pattern:

More information

1 Motivation for Instrumental Variable (IV) Regression

1 Motivation for Instrumental Variable (IV) Regression ECON 370: IV & 2SLS 1 Instrumental Variables Estimation and Two Stage Least Squares Econometric Methods, ECON 370 Let s get back to the thiking in terms of cross sectional (or pooled cross sectional) data

More information

Channel Allocation Using Pricing in Satellite Networks

Channel Allocation Using Pricing in Satellite Networks Channel Allocation Using Pricing in Satellite Networks Jun Sun and Eytan Modiano Laboratory for Information and Decision Systems Massachusetts Institute of Technology {junsun, modiano}@mitedu Abstract

More information

Multiple Bits Distributed Moving Horizon State Estimation for Wireless Sensor Networks. Ji an Luo

Multiple Bits Distributed Moving Horizon State Estimation for Wireless Sensor Networks. Ji an Luo Multiple Bits Distributed Moving Horizon State Estimation for Wireless Sensor Networks Ji an Luo 2008.6.6 Outline Background Problem Statement Main Results Simulation Study Conclusion Background Wireless

More information

Estimation Error Bounds for Frame Denoising

Estimation Error Bounds for Frame Denoising Estimation Error Bounds for Frame Denoising Alyson K. Fletcher and Kannan Ramchandran {alyson,kannanr}@eecs.berkeley.edu Berkeley Audio-Visual Signal Processing and Communication Systems group Department

More information

Lecture 11: Continuous-valued signals and differential entropy

Lecture 11: Continuous-valued signals and differential entropy Lecture 11: Continuous-valued signals and differential entropy Biology 429 Carl Bergstrom September 20, 2008 Sources: Parts of today s lecture follow Chapter 8 from Cover and Thomas (2007). Some components

More information

THE potential for large-scale sensor networks is attracting

THE potential for large-scale sensor networks is attracting IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 55, NO. 1, JANUARY 2007 327 Detection in Sensor Networks: The Saddlepoint Approximation Saeed A. Aldosari, Member, IEEE, and José M. F. Moura, Fellow, IEEE

More information

Sheppard s Correction for Variances and the "Quantization Noise Model"

Sheppard s Correction for Variances and the Quantization Noise Model Sheppard s Correction for Variances and the "Quantization Noise Model" by Stephen B. Vardeman * Statistics and IMSE Departments Iowa State University Ames, Iowa vardeman@iastate.edu November 6, 2004 Abstract

More information

Module 2. Random Processes. Version 2, ECE IIT, Kharagpur

Module 2. Random Processes. Version 2, ECE IIT, Kharagpur Module Random Processes Version, ECE IIT, Kharagpur Lesson 9 Introduction to Statistical Signal Processing Version, ECE IIT, Kharagpur After reading this lesson, you will learn about Hypotheses testing

More information

Complements on Simple Linear Regression

Complements on Simple Linear Regression Complements on Simple Linear Regression Terry R. McConnell Syracuse University March 16, 2015 Abstract We present a simple-minded approach to a variant of simple linear regression that seeks to minimize

More information

Parametric Techniques

Parametric Techniques Parametric Techniques Jason J. Corso SUNY at Buffalo J. Corso (SUNY at Buffalo) Parametric Techniques 1 / 39 Introduction When covering Bayesian Decision Theory, we assumed the full probabilistic structure

More information

SUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING. Kenneth Zeger

SUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING. Kenneth Zeger SUBOPTIMALITY OF THE KARHUNEN-LOÈVE TRANSFORM FOR FIXED-RATE TRANSFORM CODING Kenneth Zeger University of California, San Diego, Department of ECE La Jolla, CA 92093-0407 USA ABSTRACT An open problem in

More information

Optimal matching in wireless sensor networks

Optimal matching in wireless sensor networks Optimal matching in wireless sensor networks A. Roumy, D. Gesbert INRIA-IRISA, Rennes, France. Institute Eurecom, Sophia Antipolis, France. Abstract We investigate the design of a wireless sensor network

More information

Modulation of symmetric densities

Modulation of symmetric densities 1 Modulation of symmetric densities 1.1 Motivation This book deals with a formulation for the construction of continuous probability distributions and connected statistical aspects. Before we begin, a

More information

BAYESIAN DESIGN OF DECENTRALIZED HYPOTHESIS TESTING UNDER COMMUNICATION CONSTRAINTS. Alla Tarighati, and Joakim Jaldén

BAYESIAN DESIGN OF DECENTRALIZED HYPOTHESIS TESTING UNDER COMMUNICATION CONSTRAINTS. Alla Tarighati, and Joakim Jaldén 204 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP) BAYESIA DESIG OF DECETRALIZED HYPOTHESIS TESTIG UDER COMMUICATIO COSTRAITS Alla Tarighati, and Joakim Jaldén ACCESS

More information

V. Properties of estimators {Parts C, D & E in this file}

V. Properties of estimators {Parts C, D & E in this file} A. Definitions & Desiderata. model. estimator V. Properties of estimators {Parts C, D & E in this file}. sampling errors and sampling distribution 4. unbiasedness 5. low sampling variance 6. low mean squared

More information

WITH the significant advances in networking, wireless

WITH the significant advances in networking, wireless IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 12, DECEMBER 2006 4519 Target Location Estimation in Sensor Networks With Quantized Data Ruixin Niu, Member, IEEE, and Pramod K. Varshney, Fellow, IEEE

More information

Transmission Schemes for Lifetime Maximization in Wireless Sensor Networks: Uncorrelated Source Observations

Transmission Schemes for Lifetime Maximization in Wireless Sensor Networks: Uncorrelated Source Observations Transmission Schemes for Lifetime Maximization in Wireless Sensor Networks: Uncorrelated Source Observations Xiaolu Zhang, Meixia Tao and Chun Sum Ng Department of Electrical and Computer Engineering National

More information

Target Localization in Wireless Sensor Networks with Quantized Data in the Presence of Byzantine Attacks

Target Localization in Wireless Sensor Networks with Quantized Data in the Presence of Byzantine Attacks Target Localization in Wireless Sensor Networks with Quantized Data in the Presence of Byzantine Attacks Keshav Agrawal, Aditya Vempaty, Hao Chen and Pramod K. Varshney Electrical Engineering Department,

More information

Distributed Estimation via Random Access

Distributed Estimation via Random Access SUBMITTED TO IEEE TRANSACTIONS ON INFORMATION THEORY, AUG. 2006. 1 Distributed Estimation via Random Access Animashree Anandkumar, Student Member, IEEE,, Lang Tong, Fellow, IEEE and Ananthram Swami Senior

More information

Lecture 35: December The fundamental statistical distances

Lecture 35: December The fundamental statistical distances 36-705: Intermediate Statistics Fall 207 Lecturer: Siva Balakrishnan Lecture 35: December 4 Today we will discuss distances and metrics between distributions that are useful in statistics. I will be lose

More information