SENSOR SELECTION FOR RANDOM FIELD ESTIMATION IN WIRELESS SENSOR NETWORKS


J Syst Sci Complex (2012) 25:

Yang WENG · Lihua XIE · Wendong XIAO

DOI: /s
Received: 10 May 2010 / Revised: 30 January 2011
© The Editorial Office of JSSC & Springer-Verlag Berlin Heidelberg 2012

Abstract  This paper studies the sensor selection problem for random field estimation in wireless sensor networks. The authors first prove that selecting a set of l sensors that minimizes the estimation error under the D-optimal criterion is NP-complete, and propose an iterative algorithm to pursue a suboptimal solution. Furthermore, in order to improve the bandwidth and energy efficiency of wireless sensor networks, the authors propose a best linear unbiased estimator for a Gaussian random field with quantized measurements and study the corresponding sensor selection problem. For the case of an unknown covariance matrix, the authors propose an estimator of the covariance matrix based on the measurements and analyze its sensitivity. Simulation results show the good performance of the proposed algorithms.

Key words  BLUE, covariance matrix, exchange algorithm, NP-completeness, quantization, random field, sensor selection.

1 Introduction

The developments in micro-electro-mechanical systems technology, wireless communications, and digital electronics have made possible the large-scale deployment of low-cost wireless sensor networks (WSNs) with small sensor nodes [1]. WSNs have been used to monitor various phenomena, such as the moisture content of an agricultural field, the temperature distribution in a building, the pH value of a river, and the salinity of sea water. Usually the observed data in a WSN are spatially correlated, with a covariance structure that may be modeled by a random field [2], which is a generalized stochastic process with an underlying parameter vector. In most applications, the WSN nodes are powered by small batteries, which limits the lifetime of the WSN.
Therefore, energy-efficient algorithms in WSNs are important.

Yang WENG, College of Mathematics, Sichuan University, Chengdu, China. Email: wengyang@scu.edu.cn.
Lihua XIE, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. Email: elhxie@ntu.edu.sg.
Wendong XIAO, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China. Email: wendongxiao68@gmail.com.
This work was supported by the National Natural Science Foundation of China Key Program under Grant No. and the National Natural Science Foundation of China under Grant No. Part of this work was presented at the 29th China Control Conference, Beijing, China, July. This paper was recommended for publication by Editor Yiguang HONG.

It is desirable that only a subset of the sensor nodes be tasked at any time without compromising the network performance. Recently, many approaches to sensor selection have been proposed for estimation and detection. In [3], a utility is defined for each set of sensors, and the sensors are selected to maximize the utility while satisfying a given energy constraint. The sensor selection problem for the estimation of a deterministic parameter has been studied in [4] via convex relaxation. A heuristic method has been given for subset selection; this method pursues not only a suboptimal choice of measurements, but also a bound on how well the globally optimal choice performs. The sensor selection problem for event detection in WSNs has been addressed in [5], where a dimension reduction method is introduced that selects a subset of sensors maximizing the Kullback-Leibler distance between the distributions of the selected measurements. However, the aforementioned results cannot be applied to the random field estimation problem in WSNs.

So far, little work has been done on random field estimation in WSNs. In [6], the optimal sensor placement for the Gaussian process estimation problem is formulated as the maximization of the mutual information between the chosen locations and the other locations. This combinatorial problem is proved to be NP-complete, and a polynomial-time greedy approximation algorithm is proposed. Although sensor placement for random field estimation is addressed there, the Gaussian assumption limits the applicability of that work.

In application scenarios of WSNs, two further issues should be considered. The first is the wireless bandwidth shared among the sensor nodes and the fusion center. Quantization has been viewed as a fundamental tool for saving bandwidth by reducing the amount of data needed to represent a signal [7-8].
Quantization has been well studied in digital signal processing and control, where a continuous-valued signal is quantized due to the finite word length of a microprocessor [9]. In WSNs, quantization is also necessary to reduce energy consumption, since communication costs most of the energy and the amount of energy consumed is related to the amount of data transmitted. The second issue is that the covariance structure of the random field is not known a priori in many situations, since the sensor nodes are deployed in an unknown or even hostile environment. Estimation of the covariance structure of a spatial process is a fundamental prerequisite for the design of a monitoring network, since an inaccurate estimate of the covariance matrix leads to poor estimation accuracy [10].

In this paper, the sensor selection problem for random field estimation is considered. The main contributions of this paper are threefold. First, we formulate sensor selection for random field estimation as an optimization problem and prove the NP-completeness of this problem. Unlike other results on sensor selection for random field estimation (e.g., [6]), we do not assume a particular distribution for the random field. Second, we propose a heuristic algorithm to pursue the optimal solution of the proposed optimization problem. Although such exchange-type algorithms have been used in statistics for regression problems, they had not been applied to the sensor selection problem. Third, we consider practical application scenarios for our sensor selection: we propose a best linear unbiased estimator (BLUE) for a Gaussian random field with quantized measurements and study the corresponding sensor selection problem, and, in the case of an unknown covariance matrix, we propose an estimator of the covariance matrix based on the measurements and analyze its sensitivity.

The rest of this paper is organized as follows.
In Section 2, the formulation of the sensor selection problem for random field estimation in WSNs is derived. The complexity of the sensor selection problem of minimizing the determinant of the estimation error covariance matrix with a given number of sensors is analyzed in Section 3. A heuristic algorithm that pursues a suboptimal solution of this sensor selection problem is proposed in Section 4. Application scenarios of sensor selection for random field estimation in WSNs with quantization and with an unknown covariance matrix are discussed in Section 5. Simulation results are reported in Section 6 to show the

performance of our methods. Concluding remarks are given in Section 7.

2 Problem Formulation

2.1 Random Field Model

Denote the random field under discussion by {Z(s), s ∈ D}, where D is a compact subset of Euclidean space and Z(s) is a random variable at location s. There are n sensor nodes deployed in D at locations P_n = {s_1, s_2, ..., s_n}. Each sensor can take a noise-free measurement of this field. We wish to reconstruct this field from the observations of the distributed sensor nodes. In order to specify a random field, we denote its mean function by

M(s) = E(Z(s)), s ∈ D, (1)

and its covariance function by

K(s, t) = cov(Z(s), Z(t)), s, t ∈ D. (2)

We study a single-snapshot scenario in a WSN, in which each sensor node can take only one measurement. For a given sensor node i at location s_i, we denote its observation by Z(s_i). We can reconstruct the random field using the observations from the n sensor nodes, namely, Z_{P_n} = (Z(s_1), Z(s_2), ..., Z(s_n))^T. We can use a simple kriging predictor to reconstruct the value of Z(s), where this predictor at s ∈ D corresponds to the BLUE [11]

Ẑ(s) = E(Z(s)) + Σ_{s,P_n} Σ_{P_n}^{-1} Z_{P_n}

with error variance

ε(s, P_n) = Var(Z(s)) − Σ_{s,P_n} Σ_{P_n}^{-1} Σ_{P_n,s},

where Σ_{s,P_n} = cov(Z(s), Z_{P_n}), Σ_{P_n} = cov(Z_{P_n}, Z_{P_n}), and E(Z(s)) and Var(Z(s)) are the expectation and variance of Z(s), respectively. Here, we do not restrict the field or its distribution to be stationary or isotropic.

2.2 Random Field Estimation in WSNs

We want to choose the optimal locations for a given number of sensors to reconstruct the whole field under certain criteria. A natural objective is to choose sensor locations whose observations minimize the distortion of the estimate of the whole field. However, as a random field lives in a multi-dimensional vector space or even a manifold, using several observations to estimate the random field directly is complicated.
One convenient way is to consider the discrete case with finitely many random variables by mapping the space to a list of random variables. Assume that, by using n sensors, we can discretize the random field Z(s) into a sequence {Z(s_1), Z(s_2), ..., Z(s_n)}, which is denoted by

Z = (Z_1, Z_2, ..., Z_n)^T (3)

with index set I = {1, 2, ..., n} and corresponding covariance matrix Σ. We want to estimate Z from several observations taken at the locations where sensor nodes can be deployed. Mathematically, we have a random vector Z of dimension n to be estimated, and we want to select l random variables from this vector in order to estimate the remaining r = n − l variables in the sense of minimum error variance. We denote the selected vector by Z_A = (Z_{i_1}, Z_{i_2}, ..., Z_{i_l})^T, with selected index set A ⊆ I,

and denote the vector to be estimated by

Z_Ā = Z \ Z_A, (4)

where Ā = I \ A. Without loss of generality, we further assume that M(s) = 0 for all s ∈ D. The BLUE for the remaining variables can be presented as

Ẑ_Ā = Σ_{Ā,A} Σ_A^{-1} Z_A (5)

with error covariance matrix

D_Ā = cov(Ẑ_Ā − Z_Ā) = Σ_Ā − Σ_{Ā,A} Σ_A^{-1} Σ_{A,Ā}, (6)

where Σ_A = cov(Z_A), Σ_Ā = cov(Z_Ā), and Σ_{Ā,A} = cov(Z_Ā, Z_A) = Σ_{A,Ā}^T. In this paper, we always assume that the covariance matrix Σ_A is invertible for an arbitrary random vector Z_A, A ⊆ I.

For random field estimation in WSNs, we can assume that every discretized location of the random field can be reached by a sensor node, since a highly dense deployment is possible [1]. Therefore, our problem can be regarded as a sensor selection problem, which is closely related to optimal experiment design [4,12], originally proposed by Wald [13] and extended by Kiefer [14]. Since Kiefer's seminal work, a large body of literature has dealt with the theoretical aspects of optimal design. As in traditional experimental design theory, a monotonic function can be used to compare the efficiency of the various selections Z_{i_1}, Z_{i_2}, ..., Z_{i_l}. The most widely used criterion for the experiment design problem is the D-optimal criterion [15].

Definition 1  The criterion is called D-optimal if the determinant of the error covariance matrix D_Ā is minimized, i.e.,

Z_A = argmin_{A ⊆ I} |D_Ā|, (7)

where |·| is the determinant function.

While there might not always be a direct relation between energy efficiency and the number of active sensors, reducing the number of active sensors generally leads to less energy consumption. In WSNs, in order to conserve energy and prolong the network lifetime, it is necessary to select a group of sensor nodes to collect observation data for estimating the field while keeping the other nodes inactive (sleeping).
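For concreteness, the BLUE (5) and its error covariance (6) can be evaluated directly on a small example. The following sketch uses an illustrative 3-site covariance matrix (the numbers are made up, not from the paper) and the selected set A = {1}, so that Σ_A is a scalar.

```python
# Sketch of the BLUE (5) and its error covariance (6) for a toy 3-site
# field with selected set A = {1}. Covariance values are illustrative only.

Sigma = [[2.0, 0.8, 0.5],
         [0.8, 1.5, 0.3],
         [0.5, 0.3, 1.0]]

A = [0]          # indices of selected (active) sites
Abar = [1, 2]    # indices of sites to be estimated

sigma_A = Sigma[0][0]                       # Sigma_A (here 1x1)
Sigma_AbarA = [Sigma[i][0] for i in Abar]   # cov(Z_Abar, Z_A)

def blue(z_A):
    """Z_hat_Abar = Sigma_AbarA * Sigma_A^{-1} * z_A, cf. (5)."""
    return [c / sigma_A * z_A for c in Sigma_AbarA]

# D_Abar = Sigma_Abar - Sigma_AbarA Sigma_A^{-1} Sigma_AAbar, cf. (6)
D = [[Sigma[i][j] - Sigma[i][0] * Sigma[j][0] / sigma_A for j in Abar]
     for i in Abar]

print([round(v, 6) for v in blue(1.0)])           # [0.4, 0.25]
print([[round(v, 6) for v in row] for row in D])  # [[1.18, 0.1], [0.1, 0.875]]
```

Note that D is symmetric and its diagonal entries are smaller than the prior variances Σ_22 = 1.5 and Σ_33 = 1.0, as conditioning on an observation can only reduce the error variance.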
The goal of this paper is to propose an approach that obtains the optimal estimation performance under the energy consumption constraint of a WSN, formulated as

argmin_{A: #A = l} |Σ_Ā − Σ_{Ā,A} Σ_A^{-1} Σ_{A,Ā}|, (8)

where #A is the cardinality of the set A.

3 NP-Completeness

Here we show that the decision version of the optimization problem (8) is NP-complete. The decision version of our optimization problem is as follows.

Definition 2  The decision version of the sensor selection for random field estimation problem:
Instance: A random variable set {Z_1, Z_2, ..., Z_n} with index set I = {1, 2, ..., n} and corresponding covariance matrix Σ, a positive integer l, and a given number M.

Question: Is there a subset A ⊆ I with #A ≤ l such that the determinant of the estimation error covariance matrix given in (8) is at most M?

Theorem 1  There is no polynomial time algorithm that solves the sensor selection for random field estimation problem (8), unless P = NP.

Proof  First, the decision version of our problem is in NP: a nondeterministic algorithm needs to guess a subset A of I with cardinality at most l and check in polynomial time that the determinant of the estimation error covariance matrix D_Ā is less than the given value M.

Next, to prove that our problem (8) is NP-complete, we show that it contains an NP-complete problem as a special case. Using the determinant identity

|[A  B; B^T  C]| = |A| · |C − B^T A^{-1} B|,

where the matrix A is invertible, we can obtain

|Σ| = |[Σ_A  Σ_{A,Ā}; Σ_{Ā,A}  Σ_Ā]| = |Σ_A| · |Σ_Ā − Σ_{Ā,A} Σ_A^{-1} Σ_{A,Ā}|, (9)

since the covariance matrix Σ_A is invertible for an arbitrary random vector Z_A, A ⊆ I. Recalling the error covariance matrix D_Ā of Z_Ā in (6) and the determinant identity in (9), we get

|Σ| = |Σ_A| · |D_Ā|.

As the determinant of Σ is constant, problem (8) is equivalent to

argmax_{A: #A = l} |Σ_A|. (10)

Under the Gaussian assumption, the differential entropy of the multivariate normal distribution is [16]

H(A) = (1/2)(l + l ln(2π) + ln |Σ_A|), (11)

where H(·) is the (differential) Shannon entropy. Thus, under the Gaussian assumption, problem (10) reduces to

argmax_{A: #A = l} H(A). (12)

The decision version of problem (12) is as follows: under the Gaussian assumption, with a given number M and the covariance matrix Σ of all sites, is there a subset A ⊆ I of cardinality l such that H(A) ≥ M? This problem is proved to be NP-complete in [17]. Restricting our problem (8) to instances in which all sites have a Gaussian distribution, our problem therefore contains an NP-complete problem as a special case.
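The key identity |Σ| = |Σ_A| · |D_Ā| used in this proof can be checked numerically; the sketch below (illustrative covariance values, A = {1}) verifies it for a 3 × 3 matrix.

```python
# Numerical check of |Sigma| = |Sigma_A| * |D_Abar| from the proof of
# Theorem 1, on an illustrative 3x3 covariance matrix with A = {1}.

def det(M):
    """Determinant by Laplace expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

Sigma = [[2.0, 0.8, 0.5],
         [0.8, 1.5, 0.3],
         [0.5, 0.3, 1.0]]

# D_Abar for A = {1}: Schur complement of the (1,1) entry.
D = [[Sigma[i][j] - Sigma[i][0] * Sigma[j][0] / Sigma[0][0] for j in (1, 2)]
     for i in (1, 2)]

lhs = det(Sigma)             # |Sigma|
rhs = Sigma[0][0] * det(D)   # |Sigma_A| * |D_Abar|
print(round(lhs, 6), round(rhs, 6))  # both sides agree (2.045 here)
```

This is exactly why minimizing |D_Ā| over subsets of fixed size is the same as maximizing |Σ_A|: the product of the two determinants is the fixed constant |Σ|.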
4 Exchange Algorithm

Since the formulated optimization problem (8) has been proved NP-complete, the standard way of dealing with such combinatorial optimization problems is to propose a heuristic algorithm that approximates the optimal solution [18-19]. Our proposed iterative algorithm is similar to Fedorov's exchange algorithm, which constructs a discrete D-optimal design for a regression problem [15,20]. Intuitively, in order to minimize the estimation error, we should select a most informative subset of pre-specified size from a set of correlated random variables. The

basic strategy of the exchange algorithm is a swap operation at each iteration: the sensor node with the least contribution to the estimation performance is deleted from the chosen sensors {i_1, i_2, ..., i_l}, and the sensor with the largest improvement to the estimation performance is added. The iterations continue until no improvement can be made by swapping. At the k-th iteration of our algorithm, the current set of active sensor nodes is denoted by A_k = {i_1, i_2, ..., i_l}, and the sleeping nodes, which need to be estimated, are denoted by Ā_k = I \ A_k.

Theorem 2  Given the current set of active sensor nodes A_k = {i_1, i_2, ..., i_l} at the k-th step, a swap consists of two stages, a deletion stage and an addition stage. At the deletion stage, the node i⁻ with the least contribution is deleted from the set of active nodes. Node i⁻ is the one with minimum error variance when estimated by the other active sensor nodes, i.e.,

i⁻ = argmin_{j ∈ A_k} D_j(A_k \ {j}), (13)

where D_j(A_k \ {j}) denotes the error variance of Z_j estimated by the set A_k \ {j}. The candidate nodes for the next stage are Ā_k⁺ = Ā_k ∪ {i⁻}. At the addition stage, the node i⁺ with the most contribution is activated to form the set of active nodes A_{k+1}. Node i⁺ is the one with maximum error variance when estimated by A_k⁻ = A_k \ {i⁻}, i.e.,

i⁺ = argmax_{j ∈ Ā_k⁺} D_j(A_k⁻). (14)

Proof  At the deletion stage, following the proof of Theorem 1, we know that

Z_A = argmin_{A ⊆ I} |D_Ā| = argmax_{A ⊆ I} |Σ_A|.

According to the determinant identity (9), we have

|Σ_{A_k}| = |Σ_{A_k \ {j}}| · |Σ_{jj} − Σ_{j, A_k\{j}} Σ_{A_k\{j}}^{-1} Σ_{A_k\{j}, j}|,

where Σ_{j, A_k\{j}} = cov(Z_j, Z_{A_k\{j}}) and Σ_{jj} = cov(Z_j). Therefore, for the given A_k, the node with the minimum error variance when estimated by the other active sensor nodes is the one contributing least to the estimation performance, i.e.,

i⁻ = argmin_{j ∈ A_k} D_j(A_k \ {j}).

At the addition stage, suppose node j is added to the current set of active nodes A_k⁻; then

|Σ_{A_k⁻ ∪ {j}}| = |[Σ_{A_k⁻}  Σ_{A_k⁻, j}; Σ_{j, A_k⁻}  Σ_{jj}]| = |Σ_{A_k⁻}| · |Σ_{jj} − Σ_{j, A_k⁻} Σ_{A_k⁻}^{-1} Σ_{A_k⁻, j}|.

After the deletion stage at this step, the set A_k⁻ is fixed. Therefore, for an arbitrary node j ∈ Ā_k⁺, we have

i⁺ = argmax_{j ∈ Ā_k⁺} |Σ_{A_k⁻ ∪ {j}}| = argmax_{j ∈ Ā_k⁺} |Σ_{A_k⁻}| · |Σ_{jj} − Σ_{j, A_k⁻} Σ_{A_k⁻}^{-1} Σ_{A_k⁻, j}| = argmax_{j ∈ Ā_k⁺} (Σ_{jj} − Σ_{j, A_k⁻} Σ_{A_k⁻}^{-1} Σ_{A_k⁻, j}) = argmax_{j ∈ Ā_k⁺} D_j(A_k⁻),

which ends the proof of this theorem.

Remark 1  At the deletion stage of each iteration, the node best explained by the other active nodes is deleted from the set of active nodes; at the addition stage, the activated node is the node worst explained by the observations at the currently active nodes, i.e., the node with the worst estimation performance when estimated from the active set. The iterative algorithm can give better results if multiple nodes are deleted and added at each stage, at the expense of a higher computational cost.

According to Theorem 2, we present a centralized framework for sampling and estimation under a certain distortion constraint. Suppose a fusion center (or sink) is deployed in the network and all sensor nodes can transmit their observations to it. The fusion center knows the exact position of each sensor node and the covariance matrix of all sensor nodes. Starting with l arbitrary active nodes, the fusion center performs sensor selection by iteratively swapping one of the current active sensors with one of the sleeping sensors. The iterative algorithm terminates when no improvement can be made by swapping. The centralized approximation algorithm is summarized in Algorithm 1.
Algorithm 1: Centralized sensor selection scheme
1) Start with k = 0 and l arbitrary nodes A_0 active;
2) While |Σ_{A_k}| > |Σ_{A_{k−1}}| do
   a) Delete from the set of active sensor nodes the node i⁻ with the least contribution to the estimation performance according to (13), i.e., i⁻ = argmin_{j ∈ A_k} D_j(A_k \ {j});
   b) Add to the set of active nodes the node i⁺ with the most contribution to the estimation performance according to (14), i.e., i⁺ = argmax_{j ∈ Ā_k⁺} D_j(A_k⁻);
   c) The fusion center updates the set of active sensor nodes to A_{k+1} and sets k = k + 1;
3) end while.
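Algorithm 1 can be sketched in a few lines of Python. The illustration below (our own sketch, not the paper's implementation, on hypothetical covariance data) computes each error variance D_j(S) via the determinant ratio |Σ_{S∪{j}}| / |Σ_S|, which equals the Schur complement used in the proof of Theorem 2.

```python
import random

def det(M):
    """Determinant by Laplace expansion; adequate for the tiny sets here."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def sub(Sigma, idx):
    """Principal submatrix of Sigma indexed by idx."""
    return [[Sigma[i][j] for j in idx] for i in idx]

def err_var(Sigma, j, S):
    """D_j(S): error variance of Z_j estimated from the set S."""
    if not S:
        return Sigma[j][j]
    return det(sub(Sigma, S + [j])) / det(sub(Sigma, S))

def exchange(Sigma, l):
    """Swap-based exchange algorithm maximizing |Sigma_A| over l-subsets."""
    n = len(Sigma)
    A = list(range(l))                  # arbitrary initial active set
    best = det(sub(Sigma, A))
    while True:
        # deletion stage, cf. (13)
        i_minus = min(A, key=lambda j: err_var(Sigma, j, [t for t in A if t != j]))
        A_minus = [t for t in A if t != i_minus]
        # addition stage, cf. (14); i_minus itself stays a candidate
        cand = [j for j in range(n) if j not in A_minus]
        i_plus = max(cand, key=lambda j: err_var(Sigma, j, A_minus))
        new = det(sub(Sigma, A_minus + [i_plus]))
        if new <= best + 1e-12:         # no strict improvement: stop
            return sorted(A), best
        A, best = A_minus + [i_plus], new

# toy positive-definite covariance: Sigma = G G^T + 0.1 I
random.seed(0)
n = 5
G = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
Sigma = [[sum(G[i][t] * G[j][t] for t in range(n)) + (0.1 if i == j else 0.0)
          for j in range(n)] for i in range(n)]

A, val = exchange(Sigma, 2)
print(A, round(val, 4))
```

Because keeping i⁻ is always among the candidates at the addition stage, the determinant is non-decreasing across iterations, so the loop terminates.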

5 Application Scenarios

Two application scenarios are investigated in this section. First, the preceding sections assumed that the sensor nodes can transmit their measurements to the fusion center over a channel of unlimited bandwidth. In practice, however, the communication bandwidth of the small, low-energy nodes in a WSN is always limited. We therefore propose an estimation approach at the fusion center that produces the BLUE for the unselected sites from the quantized measurements of the selected sites. Second, when a WSN is deployed in an unknown environment, the a priori covariance matrix is not available in many situations. We can obtain an approximated BLUE for the unselected sites by resorting to an estimated covariance matrix; the sensitivity of this approximated BLUE should then be considered.

5.1 Estimation with Quantized Measurements

In this subsection, we consider random field estimation with quantized measurements in WSNs. Suppose the sensor nodes quantize their measurements before transmitting them to the fusion center. The estimation at the fusion center can only be based on the quantized measurements, which can be treated as compressed versions of the local sensor measurements. We consider a fixed quantizer and assume the channel can transmit messages of log_2 N bits without error, where N ≥ 2 is the number of quantization levels. Without loss of generality, we assume that the random field Z(s) has zero mean. Each selected sensor node quantizes its measurement Z_i to obtain an output Y_i, which does not depend on information from other nodes. Each Y_i is then transmitted to the fusion center. Following the notation in (4), the measurements taken at the selected sites are denoted by Z_A = (Z_{i_1}, Z_{i_2}, ..., Z_{i_l})^T.
Denote the quantizer outputs of Z_A by Y_A = (Y_1, Y_2, ..., Y_l)^T and the sites to be estimated by Z_Ā = Z \ Z_A. The BLUE for Z_Ā based on the measurements Y_A is

Ž_Ā = K Y_A = E(Z_Ā Y_A^T)[E(Y_A Y_A^T)]^{-1} Y_A (15)

with error covariance matrix

Ď_Ā = cov(Ž_Ā − Z_Ā) = Σ_Ā − K E(Y_A Y_A^T) K^T. (16)

Denote the given quantizer by Q(·). Whenever the input sample falls into the quantization interval B_k = [d_k, d_{k+1}), the quantizer output is y_k. The ij-th element of the covariance matrix of Y_A can be calculated as

[E(Y_A Y_A^T)]_{ij} = E(Y_i Y_j) = Σ_{k=1}^N Σ_{k'=1}^N y_k y_{k'} P{Z_i ∈ B_k, Z_j ∈ B_{k'}}, (17)

where

P{Z_i ∈ B_k, Z_j ∈ B_{k'}} = ∫_{B_k} ∫_{B_{k'}} p(x_1, x_2) dx_1 dx_2

and p(x_1, x_2) is the joint probability density function of Z_i and Z_j. For a Gaussian random field, the quantization levels that minimize the quantization distortion can be chosen according to [21].
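The double integral in (17) has no closed form in general. One simple way to approximate an entry of E(Y_A Y_A^T) is Monte Carlo, sketched below for two Gaussian sites with correlation ρ and a hypothetical 4-level quantizer (boundaries −1, 0, 1; outputs ±0.5, ±1.5 — all values are for illustration only, not the optimal quantizer of [21]).

```python
import math
import random

def quantize(x):
    """Hypothetical fixed 4-level quantizer Q(.) with boundaries (-1, 0, 1)."""
    if x < -1.0: return -1.5
    if x < 0.0:  return -0.5
    if x < 1.0:  return 0.5
    return 1.5

random.seed(42)
rho, m = 0.8, 20000
e_cross = e_auto = 0.0
for _ in range(m):
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    z_i = g1                                        # Z_i ~ N(0, 1)
    z_j = rho * g1 + math.sqrt(1 - rho * rho) * g2  # Z_j ~ N(0, 1), corr rho
    e_cross += quantize(z_i) * quantize(z_j)        # -> [E(Y_A Y_A^T)]_{ij}
    e_auto += quantize(z_i) ** 2                    # -> [E(Y_A Y_A^T)]_{ii}
e_cross /= m
e_auto /= m
print(round(e_cross, 3), round(e_auto, 3))
```

As expected, the cross term is positive (the sites are positively correlated) but strictly smaller than the diagonal term, mirroring the structure of the exact double sum in (17).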

Furthermore, the ij-th element of the covariance matrix E(Z_Ā Y_A^T) can be calculated as

[E(Z_Ā Y_A^T)]_{ij} = E(Z_i Y_j) = Σ_{k=1}^N E(Z_i | Z_j ∈ B_k) y_k P{Z_j ∈ B_k}. (18)

For a stationary Gaussian process, the expectation of Z_i conditioned on Z_j ∈ B_k can be calculated as

E(Z_i | Z_j ∈ B_k) = E(Z_i Z_j) E(Z_j | Z_j ∈ B_k) / Σ_Z²,

where Σ_Z² is the variance of each site in the field. According to (17) and (18), the fusion center can calculate the error covariance when it receives the quantized measurements from the sensor nodes. Therefore, the exchange algorithm proposed in Section 4 can also be implemented in the quantized case.

5.2 Estimation with Unknown Covariance Matrix

All the results in the previous sections rely on the covariance matrix Σ being known. However, when a WSN is composed of a large number of sensor nodes densely deployed in an unknown environment, the a priori covariance matrix is probably not available. Given repeated observations at every deployed node site {s_1, s_2, ..., s_n}, one can estimate Σ directly. Suppose each sensor can take m measurements, and denote the measurement taken at site s_i at time j by z_ij. The traditional estimator of the covariance matrix is the sample covariance matrix (SCM), an unbiased and efficient estimator given by

Σ̂ = (1/m) Σ_{j=1}^m Z_j Z_j^T, (19)

where Z_j = (z_{1j}, z_{2j}, ..., z_{nj})^T is the measurement vector of all sites at time j, and {z_ij}, j = 1, 2, ..., m, are i.i.d. samples of the random variable Z_i. Without loss of generality, we still assume the random field has zero mean. Intuitively, we should replace the covariance matrices in (5) and (6) with the corresponding SCMs. In this subsection, we first analyze why such a substitution makes sense, and then analyze the sensitivity of the estimator based on the estimated covariance matrix. The BLUE of Z_Ā from the selected set Z_A is obtained by minimizing the squared risk matrix

V(C) = E[(Z_Ā − C Z_A)(Z_Ā − C Z_A)^T].
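The SCM (19) is easy to check on synthetic data. The sketch below (illustrative values, not from the paper) draws zero-mean snapshots Z_j = L g_j with g_j standard normal, so the true covariance L L^T is known, and compares the plug-in BLUE coefficient for the hypothetical selected set A = {1} with its true value.

```python
import random

# Sketch of the SCM (19): snapshots Z_j = L g_j, so Sigma_true = L L^T.
# The mixing matrix L is made up for illustration.

random.seed(7)
L = [[1.0, 0.0, 0.0],
     [0.5, 1.0, 0.0],
     [0.3, 0.2, 1.0]]
n, m = 3, 4000

Sigma_true = [[sum(L[i][t] * L[j][t] for t in range(n)) for j in range(n)]
              for i in range(n)]

scm = [[0.0] * n for _ in range(n)]
for _ in range(m):
    g = [random.gauss(0, 1) for _ in range(n)]
    z = [sum(L[i][t] * g[t] for t in range(n)) for i in range(n)]  # one snapshot
    for i in range(n):
        for j in range(n):
            scm[i][j] += z[i] * z[j] / m   # (1/m) sum_j Z_j Z_j^T, cf. (19)

# plug-in BLUE coefficient for A = {1}: C = SCM_{Abar,A} * SCM_A^{-1}
C = [scm[1][0] / scm[0][0], scm[2][0] / scm[0][0]]
print([round(c, 2) for c in C])  # close to the true [0.5, 0.3]
```

With m in the thousands the SCM entries are within a few percent of the true covariances, which is why the substitution analyzed next behaves well.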
Direct minimization of V(C) gives the optimal C in the BLUE sense:

C = Σ_{Ā,A} Σ_A^{-1}.

If m measurements of Z are available, a reasonable approximation of V(C) is

Ṽ(C) = (1/m) Σ_{j=1}^m (Z_{Ā,j} − C Z_{A,j})(Z_{Ā,j} − C Z_{A,j})^T,

where Z_{A,j} and Z_{Ā,j} are the measurements of the selected node set A and the unselected node set Ā at time j, respectively. Since the matrix-valued objective Ṽ(C) cannot be minimized directly, we minimize its trace

instead:

C̃ = argmin_C tr(Ṽ(C))
  = argmin_C tr[(1/m) Σ_{j=1}^m (Z_{Ā,j} − C Z_{A,j})(Z_{Ā,j} − C Z_{A,j})^T]
  = argmin_C Σ_{j=1}^m (Z_{Ā,j} − C Z_{A,j})^T (Z_{Ā,j} − C Z_{A,j})
  = argmin_C Σ_{j=1}^m (Z_{Ā,j}^T Z_{Ā,j} − 2 Z_{A,j}^T C^T Z_{Ā,j} + Z_{A,j}^T C^T C Z_{A,j}),

where tr(·) is the trace function. Differentiating tr(Ṽ(C)) with respect to C, we have

∂ tr(Ṽ(C)) / ∂C = 2C Σ_{j=1}^m Z_{A,j} Z_{A,j}^T − 2 Σ_{j=1}^m Z_{Ā,j} Z_{A,j}^T.

Hence,

C̃ = (Σ_{j=1}^m Z_{Ā,j} Z_{A,j}^T)(Σ_{j=1}^m Z_{A,j} Z_{A,j}^T)^{-1} = ((1/m) Σ_{j=1}^m Z_{Ā,j} Z_{A,j}^T)((1/m) Σ_{j=1}^m Z_{A,j} Z_{A,j}^T)^{-1} = Σ̂_{Ā,A} Σ̂_A^{-1},

where Σ̂_{Ā,A} and Σ̂_A denote the corresponding SCMs. Substituting the SCM for the true covariance matrix leads to the approximated BLUE and its approximated error covariance matrix:

Z̃_Ā = Σ̂_{Ā,A} Σ̂_A^{-1} Z_A, (20)
D̃_Ā = cov(Z̃_Ā − Z_Ā) = Σ̂_Ā − Σ̂_{Ā,A} Σ̂_A^{-1} Σ̂_{A,Ā}. (21)

The exchange algorithm proposed in Section 4 can then be implemented with this estimated covariance matrix. Although the SCM is an unbiased and efficient estimator, a deviation always exists when only finitely many observations are available, which raises the sensitivity problem of the approximated BLUE (20). We model the deviation between the true covariance matrix and the SCM as a perturbation Δ, partitioned conformally with Σ:

Σ̂ = Σ + Δ, with Σ̂_A = Σ_A + Δ_1, Σ̂_Ā = Σ_Ā + Δ_2, Σ̂_{A,Ā} = Σ_{A,Ā} + Δ_12, Σ̂_{Ā,A} = Σ_{Ā,A} + Δ_21,

where Δ_12 = Δ_21^T. The deviation between the BLUE and the approximated BLUE is investigated in the following theorem.

Theorem 3  Assume Σ_A and Σ_A + Δ_1 are nonsingular. In addition, suppose the perturbations of the blocks of Σ satisfy ‖Δ_1‖ ≤ ε_1 ‖Σ_A‖, ‖Δ_2‖ ≤ ε_2 ‖Σ_Ā‖, ‖Δ_12‖ = ‖Δ_21‖ ≤ ε_3 ‖Σ_{Ā,A}‖ = ε_3 ‖Σ_{A,Ā}‖, and ‖Σ_A^{-1}‖ ‖Δ_1‖ < 1. Then the deviation between the BLUE and the approximated BLUE can be bounded as

‖D̃_Ā − D_Ā‖ ≤ ε_2 L_2 + ((ε_3² + 2ε_3 + κε_1)/(1 − κε_1)) L_1,

where ‖·‖ is the Euclidean norm, L_1 = ‖Σ_{Ā,A}‖ ‖Σ_A^{-1}‖ ‖Σ_{A,Ā}‖, L_2 = ‖Σ_Ā‖, and κ = ‖Σ_A^{-1}‖ ‖Σ_A‖.

Proof  We consider the deviation between the corresponding error covariance matrices D̃_Ā and D_Ā:

D̃_Ā − D_Ā = (Σ̂_Ā − Σ̂_{Ā,A} Σ̂_A^{-1} Σ̂_{A,Ā}) − (Σ_Ā − Σ_{Ā,A} Σ_A^{-1} Σ_{A,Ā})
 = Δ_2 − (Σ_{Ā,A} + Δ_21)(Σ_A + Δ_1)^{-1}(Σ_{A,Ā} + Δ_12) + Σ_{Ā,A} Σ_A^{-1} Σ_{A,Ā}
 = Δ_2 − (Σ_{Ā,A} + Δ_21)[(Σ_A + Δ_1)^{-1} − Σ_A^{-1}](Σ_{A,Ā} + Δ_12) − Δ_21 Σ_A^{-1} Σ_{A,Ā} − Δ_21 Σ_A^{-1} Δ_12 − Σ_{Ā,A} Σ_A^{-1} Δ_12.

According to the result on the inverses of perturbed nonsingular matrices [22],

‖(Σ_A + Δ_1)^{-1} − Σ_A^{-1}‖ ≤ ‖Σ_A^{-1}‖² ‖Δ_1‖ / (1 − ‖Σ_A^{-1}‖ ‖Δ_1‖). (22)

Thus,

‖D̃_Ā − D_Ā‖ ≤ ‖Δ_2‖ + ‖(Σ_{Ā,A} + Δ_21)[(Σ_A + Δ_1)^{-1} − Σ_A^{-1}](Σ_{A,Ā} + Δ_12)‖ + ‖Δ_21 Σ_A^{-1} Σ_{A,Ā}‖ + ‖Δ_21 Σ_A^{-1} Δ_12‖ + ‖Σ_{Ā,A} Σ_A^{-1} Δ_12‖
 ≤ ε_2 ‖Σ_Ā‖ + (1 + ε_3)² ‖Σ_{Ā,A}‖ (‖Σ_A^{-1}‖² ‖Δ_1‖ / (1 − ‖Σ_A^{-1}‖ ‖Δ_1‖)) ‖Σ_{A,Ā}‖ + 2ε_3 ‖Σ_{Ā,A}‖ ‖Σ_A^{-1}‖ ‖Σ_{A,Ā}‖ + ε_3² ‖Σ_{Ā,A}‖ ‖Σ_A^{-1}‖ ‖Σ_{A,Ā}‖
 ≤ ε_2 L_2 + ((ε_3² + 2ε_3 + κε_1)/(1 − κε_1)) L_1.

Remark 2  This theorem shows that when the ε_i, i = 1, 2, 3, are sufficiently small, the approximated BLUE using the SCM is close to the one using the true covariance matrix.

6 Simulations

In this section, we present simulation experiments to illustrate the effectiveness of the proposed algorithm. We randomly generate n = 100 sensor nodes in a unit square. Figure 1 shows an example distribution of sensor nodes. The covariance matrix Σ = {Σ_ij}_{n×n} of the random field is randomly generated according to the spatial model

Σ_ij = Σ_i², if i = j;  Σ_ij = Σ_i Σ_j exp(−α d_ij²), if i ≠ j, (23)

where d_ij is the distance between nodes i and j and the scaling constant α = 0.5 measures the intensity of the correlation between two nodes [11]. The {Σ_i} are generated independently from the normal distribution with mean zero and variance 10.

In the first simulation, we select 10 sensors to estimate the remaining sites. Under the D-optimal criterion, the distortion of the estimate of the random field is minimized while the

determinant of the covariance matrix of the selected nodes is maximized.

Figure 1  An example node distribution in a square area

The estimation performance during the iterative process is shown in Figure 2. The set of active nodes is initialized with 10 randomly chosen nodes. For comparison, the initial set of active nodes is the same in both the raw-measurement and quantized-measurement cases. The raw measurements are quantized into 4 bits before transmission to the fusion center, using the optimal quantizer for the Gaussian distribution designed in [21].

Figure 2  Exchange algorithm for both raw and quantized measurements

In the second simulation, we show that the estimation performance improves as the number of active sensor nodes increases. Figure 3 shows the increase of |Σ_A|, which means that the estimation error for the random field decreases. The estimation performance with quantized measurements is quite satisfactory with a moderate number of bits transmitted to the fusion center in both simulations.
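The spatial covariance model (23) used in these simulations takes only a few lines to generate. The sketch below follows the description above (α = 0.5, amplitudes Σ_i drawn from N(0, 10)); the smaller node count n = 20 is our choice for brevity, not the paper's.

```python
import math
import random

# Sketch of the spatial covariance model (23): random node positions in the
# unit square, amplitudes Sigma_i ~ N(0, 10), correlation decaying with
# squared distance (alpha = 0.5, as in the simulations).

random.seed(1)
n, alpha = 20, 0.5      # the paper uses n = 100; 20 here for brevity
pos = [(random.random(), random.random()) for _ in range(n)]
amp = [random.gauss(0.0, math.sqrt(10.0)) for _ in range(n)]  # the Sigma_i

def d2(p, q):
    """Squared Euclidean distance d_ij^2 between two nodes."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

Sigma = [[amp[i] ** 2 if i == j else
          amp[i] * amp[j] * math.exp(-alpha * d2(pos[i], pos[j]))
          for j in range(n)] for i in range(n)]

print(round(Sigma[0][0], 3), round(Sigma[0][1], 3))
```

By construction the matrix is symmetric, its diagonal entries equal Σ_i², and every off-diagonal entry is damped toward zero as the inter-node distance grows.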

Figure 3  Estimation performance with different numbers of active nodes

In the third simulation, we show the sensitivity property of the approximated BLUE, which uses the SCM instead of the true covariance matrix. Figure 4 gives a comparison of the iterations for the BLUE and the approximated BLUE with different numbers of measurements. The iterations of the exchange algorithm have a similar tendency for the three estimators. Both the first and the third simulations show that the proposed exchange algorithm terminates in about 10 steps. We have also investigated the number of iteration steps of the proposed exchange algorithm by 500 Monte Carlo simulations; the results show that the algorithm terminates within dozens of steps.

Figure 4  Comparison of the exchange algorithm for the BLUE and the approximated BLUE

7 Conclusion

In this paper, we have formulated the random field estimation problem in WSNs as a sensor selection problem. The combinatorial optimization problem of choosing sensor nodes to minimize the estimation error with a given number of sensors was proved to be NP-complete. A heuristic algorithm has been proposed to pursue a suboptimal solution for the sensor selection problem under the D-optimal criterion. Furthermore, we have considered the application scenarios of sensor selection for random field estimation of a Gaussian random field with quantized measurements as well as a random field with unknown covariance matrix. Simulation results show the good performance of the proposed algorithms.

References
[1] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, Wireless sensor networks: A survey, Computer Networks, 2002, 38:
[2] J. E. Besag, Spatial interaction and the statistical analysis of lattice systems, J. Royal Statistical Soc., Ser. B, 1974, 32(2):
[3] F. Bian, D. Kempe, and R. Govindan, Utility based sensor selection, Proceedings of IPSN'06, Nashville, Tennessee,
[4] S. Joshi and S. Boyd, Sensor selection via convex optimization, IEEE Transactions on Signal Processing, 2009, 57(2):
[5] D. Bajovic, B. Sinopoli, and J. Xavier, Sensor selection for hypothesis testing in wireless sensor networks: A Kullback-Leibler based approach, Proceedings of CDC'09, Shanghai,
[6] A. Krause, A. Singh, and C. Guestrin, Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies, The Journal of Machine Learning Research, 2008, 9:
[7] Z. Q. Luo, Universal decentralized estimation in a bandwidth constrained sensor network, IEEE Transactions on Information Theory, 2005, 51(6):
[8] K. You, L. Xie, S. Sun, and W. Xiao, Multiple-level quantized innovation Kalman filter, Proc. 17th IFAC World Congress, Korea,
[9] D. Williamson, Digital Control and Implementation, Prentice-Hall,
[10] P. D. Sampson and P. Guttorp, Nonparametric estimation of non-stationary spatial covariance structure, Journal of the American Statistical Association, 1992, 87(417):
[11] N. A.
Cressie, Statistics for Spatial Data, Revised Edition, Wiley, New York,
[12] F. Pukelsheim, Optimal Design of Experiments, Society for Industrial & Applied Mathematics,
[13] A. Wald, On the efficient design of statistical investigations, Ann. Math. Stat., 1943, 14:
[14] J. Kiefer and J. Wolfowitz, Optimum designs in regression problems, Ann. Math. Stat., 1959, 30:
[15] N. K. Nguyen and A. J. Miller, A review of some exchange algorithms for constructing discrete D-optimal designs, Computational Statistics & Data Analysis, 1981, 14:
[16] N. A. Ahmed and D. V. Gokhale, Entropy expressions and their estimators for multivariate distributions, IEEE Transactions on Information Theory, 1989, 35(3):
[17] C. W. Ko and J. Lee, An exact algorithm for maximum entropy sampling, Operations Research, 1995, 43(4):
[18] R. Cristescu, B. Beferull-Lozano, M. Vetterli, and R. Wattenhofer, Network correlated data gathering with explicit communication: NP-completeness and algorithms, IEEE/ACM Transactions on Networking, 2006, 14(1):
[19] I. D. Schizas, G. B. Giannakis, and Z. Q. Luo, Distributed estimation using reduced-dimensionality sensor observations, IEEE Transactions on Signal Processing, 2007, 55(8):
[20] V. Fedorov, Theory of Optimal Experiments, Academic, New York,
[21] J. Max, Quantizing for minimum distortion, IRE Transactions on Information Theory, 1960, 6(1):
[22] G. M. Stewart, On the perturbation of pseudo-inverses, projections and linear least squares problems, SIAM Review, 1977, 19(4):


More information

ACCURATE ASYMPTOTIC ANALYSIS FOR JOHN S TEST IN MULTICHANNEL SIGNAL DETECTION

ACCURATE ASYMPTOTIC ANALYSIS FOR JOHN S TEST IN MULTICHANNEL SIGNAL DETECTION ACCURATE ASYMPTOTIC ANALYSIS FOR JOHN S TEST IN MULTICHANNEL SIGNAL DETECTION Yu-Hang Xiao, Lei Huang, Junhao Xie and H.C. So Department of Electronic and Information Engineering, Harbin Institute of Technology,

More information

Model Selection for Geostatistical Models

Model Selection for Geostatistical Models Model Selection for Geostatistical Models Richard A. Davis Colorado State University http://www.stat.colostate.edu/~rdavis/lectures Joint work with: Jennifer A. Hoeting, Colorado State University Andrew

More information

Channel-Aware Tracking in Multi-Hop Wireless Sensor Networks with Quantized Measurements

Channel-Aware Tracking in Multi-Hop Wireless Sensor Networks with Quantized Measurements Channel-Aware Tracing in Multi-Hop Wireless Sensor Networs with Quantized Measurements XIAOJUN YANG Chang an University China RUIXIN NIU, Senior Member, IEEE Virginia Commonwealth University ENGIN MASAZADE,

More information

5. Simulated Annealing 5.1 Basic Concepts. Fall 2010 Instructor: Dr. Masoud Yaghini

5. Simulated Annealing 5.1 Basic Concepts. Fall 2010 Instructor: Dr. Masoud Yaghini 5. Simulated Annealing 5.1 Basic Concepts Fall 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Real Annealing and Simulated Annealing Metropolis Algorithm Template of SA A Simple Example References

More information

EM-algorithm for Training of State-space Models with Application to Time Series Prediction

EM-algorithm for Training of State-space Models with Application to Time Series Prediction EM-algorithm for Training of State-space Models with Application to Time Series Prediction Elia Liitiäinen, Nima Reyhani and Amaury Lendasse Helsinki University of Technology - Neural Networks Research

More information

A Note on the Budgeted Maximization of Submodular Functions

A Note on the Budgeted Maximization of Submodular Functions A Note on the udgeted Maximization of Submodular Functions Andreas Krause June 2005 CMU-CALD-05-103 Carlos Guestrin School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Abstract Many

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information