Distributed Estimation using Bayesian Consensus Filtering


Distributed Estimation using Bayesian Consensus Filtering

Saptarshi Bandyopadhyay, Student Member, IEEE, and Soon-Jo Chung, Senior Member, IEEE

arXiv: v3 [math.OC] 13 Oct 2016

Abstract—We present the Bayesian consensus filter (BCF) for tracking a moving target using a networked group of sensing agents and achieving consensus on the best estimate of the probability distributions of the target's states. Our BCF framework can incorporate nonlinear target dynamic models, heterogeneous nonlinear measurement models, non-Gaussian uncertainties, and higher-order moments of the locally estimated posterior probability distribution of the target's states obtained using Bayesian filters. If the agents combine their estimated posterior probability distributions using a logarithmic opinion pool, then the sum of Kullback–Leibler divergences between the consensual probability distribution and the local posterior probability distributions is minimized. Rigorous stability and convergence results for the proposed BCF algorithm with single or multiple consensus loops are presented. Communication of probability distributions and computational methods for implementing the BCF algorithm are discussed along with a numerical example.

I. INTRODUCTION

In this paper, the term consensus means reaching an agreement across the network regarding a certain subject of interest, called the target dynamics. Distributed and networked groups of agents can sense the target, broadcast the acquired information, and reach an agreement on the gathered information using consensus algorithms. Potential applications of distributed estimation tasks include environment and pollution monitoring, tracking dust or volcanic ash clouds, and tracking mobile targets such as flying objects or space debris using distributed sensor networks. Consensus algorithms are extensively studied in controls [1]–[6], distributed optimization [7]–[10], and distributed estimation problems [11]–[20].
Strictly speaking, consensus is different from the term distributed estimation, which refers to finding the best estimate of the target, given a distributed network of sensing agents. Many existing algorithms for distributed estimation [11]–[20] aim to obtain the estimated mean (first moment) of the estimated probability distribution of the target dynamics across the network, but cannot incorporate nonlinear target dynamics, heterogeneous nonlinear measurement models, non-Gaussian uncertainties, or higher-order moments of the locally estimated posterior probability distribution of the target's states. It is difficult to recursively combine local mean and covariance estimates using a linear consensus algorithm because the dimension of the vector transmitted by each agent increases linearly with time due to correlated process noise [21] and the covariance update equation is usually approximated by a consensus gain [22]. Multi-agent tracking or sensing networks are deployed in a distributed fashion when the target dynamics have complex temporal and spatial variations. Hence, it is necessary to preserve the complete information captured in the locally estimated posterior probability distribution of the target's states while achieving consensus across the network.

The authors are with the Department of Aerospace Engineering and Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA (e-mail: bandyop2@illinois.edu; schung@illinois.edu). This research was supported by AFOSR grant FA.

Fig. 1. A heterogeneous sensor network tracks a target, whose current position is marked. The sensing regions of some sensors are shown in light blue. The sensors use the Hierarchical BCF algorithm to estimate the probability distribution of the target's position and reach a consensus across the network.
For example, while tracking orbital debris in space, the uncertainty in position along the direction of velocity is much larger than that orthogonal to the velocity direction, and this extra information is lost if a linear consensus algorithm is used to combine the estimated means from multiple tracking stations. As shown in Fig. 1, the contour plot represents the consensual probability distribution of the target's final position, where the uncertainty along the velocity direction is relatively larger than that in the orthogonal direction. The main objective of this paper is to extend the scope of distributed estimation algorithms to track targets with general nonlinear dynamic models with stochastic uncertainties, thereby addressing the aforementioned shortcomings. Bayesian filters [23]–[26] recursively calculate the probability density/mass function of the beliefs and update them based on new measurements. The main advantage of Bayesian filters over Kalman-filter-based methods [27], [28] for estimation of nonlinear target dynamic models is that no approximation

is needed during the filtering process. In other words, the complete information about the dynamics and uncertainties of the model can be incorporated in the filtering algorithm. However, Bayesian filtering is computationally expensive. Advances in computational capability have facilitated the implementation of Bayesian filters for robotic localization and mapping [29]–[32] as well as planning and control [33]–[35]. Practical implementation of these algorithms, in their most general form, is achieved using particle filtering [26], [36] and Bayesian programming [37], [38]. This paper focuses on developing a consensus framework for distributed Bayesian filters.

The statistics literature deals with the problem of reaching a consensus among individuals in a complete graph, where each individual's opinion is represented as a probability distribution [39], [40]; and under select conditions, it is shown that consensus is achieved within the group [41]–[43]. Exchange of beliefs in decentralized systems, under communication constraints, is considered in [44], [45]. Algorithms for combining probability distributions within the exponential family, i.e., a limited class of unimodal distributions that can be expressed as an exponential function, are presented in [46], [47]. If the target's states are discrete random variables, then the local estimates can be combined using a tree-search algorithm [48] or a linear consensus algorithm [49]. In contrast, this paper focuses on developing generalized Bayesian consensus algorithms with rigorous convergence analysis for achieving consensus across the network without any assumption on the shape of local prior or posterior probability distributions. The proposed distributed estimation using Bayesian consensus filtering aims to reach an agreement across the network on the best estimate, in the information-theoretic sense, of the probability distribution of the target's states.

A.
Paper Contributions and Organization

In this paper, we assume that agents generate their local estimates of the posterior probability distribution of the target's states using Bayesian filters with/without measurement exchange with neighbors. Then, we develop algorithms for combining these local estimates, using the logarithmic opinion pool (LogOP), to generate the consensual estimate of the probability distribution of the target's states across the network. Finally, we introduce the Bayesian consensus filter (BCF), where the local prior estimates of the target's states are first updated and the local posterior probability distributions are recursively combined during the consensus stage, so that the agents can estimate the consensual probability distribution of the target's states while simultaneously maintaining consensus across the network. The flowchart for the algorithm is shown in Fig. 2 and its pseudo-code is given in Algorithm 1.

The first contribution of this paper is the LogOP-based consensus algorithm for combining posterior probability distributions during the consensus stage and achieving consensus across the network. As discussed in Section III-B, this is achieved by each agent recursively communicating its posterior probability distribution of the target's states with neighboring agents and updating its estimated probability distribution of the target's states using the LogOP.

Fig. 2. Flowchart for the BCF–LogOP algorithm describing the key steps for a single agent in a single time step: (1) compute the prior pdf; (2) measure the target; (3) obtain the measurement array; (4) compute the posterior pdf; (5) consensus stage (for each of the consensus loops: 5.1 obtain the pdfs of neighbors, 5.2 combine them using the LogOP); (6) proceed to the next time step. Steps 1–4 represent the Bayesian filtering stage, while step 5 represents the consensus stage.

As shown in Fig. 3, combining posterior probability distributions using the linear opinion pool (LinOP) typically results in multimodal solutions, which are insensitive to the weights [40].
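The qualitative contrast between the LinOP and LogOP combinations in Fig. 3(a)–(b) can be reproduced with a short numerical sketch. This is an illustration only: the grid, the two Gaussian posteriors, and the equal weights are assumptions chosen for clarity, not values from the paper.

```python
import numpy as np

# Discretized 1-D state space standing in for the compact set X.
x = np.linspace(0.0, 10.0, 1001)
dx = x[1] - x[0]

def gaussian(x, mean, std):
    """Gaussian bump, normalized with respect to the grid (Lebesgue) measure."""
    p = np.exp(-0.5 * ((x - mean) / std) ** 2)
    return p / (p.sum() * dx)

# Two local posterior pdfs with well-separated modes, as in Fig. 3(a).
f1 = gaussian(x, 3.0, 0.5)
f2 = gaussian(x, 7.0, 0.5)
alpha = 0.5  # equal weights

# LinOP: weighted arithmetic average -> typically multimodal.
f_linop = alpha * f1 + (1 - alpha) * f2

# LogOP: weighted geometric average, renormalized -> typically unimodal.
f_logop = f1**alpha * f2**(1 - alpha)
f_logop /= f_logop.sum() * dx

# Count local maxima of each combination.
modes_linop = int((np.diff(np.sign(np.diff(f_linop))) < 0).sum())
modes_logop = int((np.diff(np.sign(np.diff(f_logop))) < 0).sum())
print(modes_linop, modes_logop)  # LinOP keeps both modes; LogOP has one
```

The LinOP result retains both modes, while the LogOP result concentrates between them, matching the "jointly preferred" behavior described below.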
On the other hand, combining posterior probability distributions using the LogOP typically results in unimodal, less dispersed solutions, thereby indicating a jointly preferred consensual distribution by the network. Moreover, as discussed in Section III-B, the optimal solution does not depend upon the choice of scale of the prior probability distribution, and the LogOP is externally Bayesian [40] (see Fig. 3(c)–(d)). The KL divergence is the measure of the information lost when the consensual estimate is used to approximate the locally estimated posterior probability distributions. In Theorem 6, we show that the LogOP algorithm on a strongly connected (SC) balanced graph minimizes the information lost during the consensus stage, i.e., the consensual probability distribution minimizes the sum of KL divergences with the locally estimated posterior probability distributions. Methods for communicating probability distributions and the effects of inaccuracies on the consensual probability distribution are discussed in Section III-C.

The second contribution of this paper is the BCF algorithm presented in Section IV. As illustrated in Fig. 2 and Algorithm 1, each agent generates a local estimate of the posterior probability distribution of the target's states using the Bayesian filter. Note that measurement exchanges with neighbors during the Bayesian filtering stage are not mandatory and can be omitted. During the consensus stage, the LogOP algorithm is executed multiple times to reach an agreement across the network. The number of consensus loops ($n_{\mathrm{loop}} \in \mathbb{N}$) depends on the second

largest singular value of the matrix representing a SC balanced communication network topology. Moreover, the convergence conditions for a given number of consensus loops are derived. Note that the consensual probability distribution from the current time step is used as the prior probability distribution in the next time step, as shown in Fig. 2. The novel features of the BCF algorithm are:
- The algorithm can be used to track targets with general nonlinear time-varying target dynamic models.
- The algorithm can be used by a SC balanced network of heterogeneous agents with general nonlinear time-varying measurement models.
- The algorithm achieves global exponential convergence across the network to the consensual probability distribution of the target's states.
- The consensual probability distribution is the best estimate, in the information-theoretic sense, because it minimizes the sum of KL divergences with the locally estimated posterior probability distributions. If a central agent received all the local posterior probability distributions and were tasked to find the best estimate in the information-theoretic sense, then it would also yield the same consensual probability distribution. Hence, we claim to have achieved distributed estimation using the BCF algorithm.

The Hierarchical BCF algorithm, in Section IV-B, is used when some of the agents do not observe the target. In Section V, we apply the Hierarchical BCF algorithm to the problem of tracking orbital debris in space using the space surveillance network on Earth.

B. Notation

The time index is denoted by a right subscript. For example, $x_k$ represents the true states of the target at the $k$-th time instant. The target is always within the compact state space, i.e., $x_k \in \mathcal{X}$, $\forall k \in \mathbb{N}$. Also, $x_{1:k}$ represents an array of the true states of the target from the first to the $k$-th time instant. The agent index is denoted by a lower-case right superscript. For example, $z_k^j$ represents the measurement taken by the $j$-th agent at the $k$-th time instant. The symbol $\mathbb{P}$ refers to the probability of an event.
$\mathcal{F}_k^j$ represents the estimated probability density function (pdf) of the target's states over the state space $\mathcal{X}$, by the $j$-th agent at the $k$-th time instant. The symbol $p$ also refers to a pdf or probability mass function (pmf) over $\mathcal{X}$. During the consensus stage at the $k$-th time instant, $\mathcal{F}_{k,\nu}^j$ represents the local pdf of the target's states by the $j$-th agent at the $\nu$-th consensus step, and $\mathcal{F}_k^*$ represents the consensual pdf to which each $\mathcal{F}_{k,\nu}^j$ converges. Let $\mathcal{B}(\mathcal{X})$ be the Borel $\sigma$-algebra on $\mathcal{X}$.

The communication network topology at the $k$-th time instant is represented by the directed time-varying graph $\mathcal{G}_k$, where all the agents of the system form the set of vertices $\mathcal{V}$ (which does not change with time) and the set of directed edges is denoted by $\mathcal{E}_k$. The set of neighbors of the $j$-th agent at the $k$-th time instant is the set of agents from which the $j$-th agent receives information at the $k$-th time instant and is denoted by $\mathcal{N}_k^j$; i.e., $l \in \mathcal{N}_k^j$ if and only if $(l, j) \in \mathcal{E}_k$, for all $l, j \in \mathcal{V}$. The set of inclusive neighbors of the $j$-th agent is denoted by $\mathcal{J}_k^j := \mathcal{N}_k^j \cup \{j\}$.

Let $\mathbb{N}$ and $\mathbb{R}$ be the sets of natural numbers (positive integers) and real numbers respectively. The set of all $m$-by-$n$ matrices over the field of real numbers $\mathbb{R}$ is denoted by $\mathbb{R}^{m \times n}$. Let $\lambda$ and $\sigma$ represent an eigenvalue and a singular value of a square matrix. Let $\mathbf{1} = [1, 1, \ldots, 1]^T$, $I$, $0$, and $\varphi$ be the ones vector, the identity matrix, the zero matrix of appropriate sizes, and the empty set respectively. The symbols $|\cdot|$, $\lceil\cdot\rceil$, and $\mathrm{sgn}(\cdot)$ represent the absolute value, ceiling function, and signum function respectively. Let $\ln(\cdot)$ and $\log_c(\cdot)$ represent the natural logarithm and the logarithm to the base $c$. Finally, $\|\cdot\|_{\ell_p}$ represents the $\ell_p$ vector norm. The $L^p(\mathcal{X})$ function space denotes the set of all functions $f(x): \mathcal{X} \to \mathbb{R}$ with the bounded integral $\big(\int_{\mathcal{X}} |f(x)|^p\, d\mu(x)\big)^{1/p}$, where $\mu$ is a measure on $\mathcal{X}$.

II. PRELIMINARIES

In this section, we first state four assumptions used throughout this paper and then introduce the problem statement of BCF. Next, we discuss an extension of the Bayesian filter to sensor fusion over a network.

Assumption 1.
In this paper, all the algorithms are presented in discrete time.

Assumption 2. The state space $\mathcal{X} \subset \mathbb{R}^{n_x}$ is closed and bounded. Hence, by the Heine–Borel theorem (cf. [50, pp. 86]), $\mathcal{X}$ is compact.

Assumption 3. All continuous probability distributions are upper-bounded by some large value $M \in \mathbb{R}$.

Assumption 4. The inter-agent communication time scale is much faster than the tracking/estimation time scale.

Assumptions 1 and 2 are introduced to discretize the time and bound the state space, so that the algorithms are computationally tractable. Under these assumptions, particle filters [36], approximate grid-based filters, or histogram filters [32] can be used to execute the algorithms developed in this paper. Assumptions 2 and 3 are introduced to take advantage of the results in information theory and measure theory, which deal with bounded functions on compact support. Under Assumption 4, the agents can execute multiple consensus loops within each tracking time step. We envisage that the results in this paper could be extended to continuous time if the Fokker–Planck equations are solved efficiently [51] and additional issues due to communication delay and time-scale separation are addressed. Under Assumption 1, we do not discuss continuous-time-related issues in this paper. Next, we show that discrete and continuous probability distributions can be handled in a unified manner.

Remark 1. Let $\mathcal{B}(\mathcal{X})$ be the Borel $\sigma$-algebra for $\mathcal{X}$. The probability of a set $A \in \mathcal{B}(\mathcal{X})$ may be written as the Lebesgue–Stieltjes integral $\mathbb{P}(A) = \int_A p(x)\, d\mu(x)$, where $\mu$ is a measure on $\mathcal{X}$. In the continuous case, $p(x)$ is the pdf and $\mu$ is the Lebesgue measure. In the discrete case, $p(x)$ is the pmf and $\mu$ is the counting measure. Hence, in this paper, we only deal with pdfs over $\mathcal{X}$ with $\mu$ as the Lebesgue measure. Similar arguments will also work for pmfs or mixed probability distributions.
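Remark 1's unified treatment is exactly what a grid or histogram implementation exploits: one normalization routine serves both the continuous case (Lebesgue measure, cell widths) and the discrete case (counting measure, unit weights). The following minimal sketch illustrates this; the particular grid and distributions are assumptions for illustration.

```python
import numpy as np

def normalize(p, mu_weights):
    """Normalize so the Lebesgue-Stieltjes integral sum(p * mu) equals 1."""
    return p / np.sum(p * mu_weights)

# Continuous case: pdf on a grid, mu = Lebesgue measure (uniform cell width dx).
x = np.linspace(0.0, 1.0, 101)
lebesgue = np.full_like(x, x[1] - x[0])
pdf = normalize(np.exp(-10.0 * (x - 0.5) ** 2), lebesgue)

# Discrete case: pmf on six outcomes, mu = counting measure (all ones).
counting = np.ones(6)
pmf = normalize(np.array([1.0, 2.0, 3.0, 3.0, 2.0, 1.0]), counting)

# Both satisfy P(X) = integral of p d(mu) = 1 under their respective measures.
print(np.sum(pdf * lebesgue), np.sum(pmf))  # both approximately 1.0
```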

A. Problem Statement

Let $\mathcal{X} \subset \mathbb{R}^{n_x}$ be the $n_x$-dimensional state space of the target. The dynamics of the target in discrete time $\{x_k,\ k \in \mathbb{N},\ x_k \in \mathcal{X}\}$ is given by:

$x_k = f_k(x_{k-1}, v_{k-1})$,  (1)

where $f_k: \mathbb{R}^{n_x} \times \mathbb{R}^{n_v} \to \mathbb{R}^{n_x}$ is a possibly nonlinear time-varying function of the state $x_{k-1}$ and an independent and identically distributed (i.i.d.) process noise $v_{k-1}$, where $n_v$ is the dimension of the process noise vector. Let $m$ heterogeneous agents simultaneously track this target and estimate the pdf of the target's states, where $m$ does not change with time. The measurement model of the $j$-th agent is given by:

$z_k^j = h_k^j(x_k, w_k^j), \quad \forall j \in \{1, \ldots, m\}$,  (2)

where $h_k^j: \mathbb{R}^{n_x} \times \mathbb{R}^{n_w} \to \mathbb{R}^{n_z}$ is a possibly nonlinear time-varying function of the state $x_k$ and an i.i.d. measurement noise $w_k^j$, where $n_z$ and $n_w$ are the dimensions of the measurement and measurement noise vectors respectively. Note that the measurement model of the agents is quite general, since it accommodates heterogeneous sensors with various bandwidths, ranges, and noise characteristics, as well as partial state observation.

The objective of the BCF is to estimate the target's states and maintain consensus across the network. This objective is achieved in two steps: (i) each agent locally estimates the pdf of the target's states using a Bayesian filter, and (ii) each agent's local estimate converges to a global estimate during the consensus stage (see Fig. 1). The objective of Bayesian filtering (with/without measurement exchange), discussed in Section II-B, is to estimate the posterior pdf of the target's states at the $k$-th time instant, denoted by $\mathcal{F}_k^j$, $\forall j \in \{1, \ldots, m\}$, using the estimated prior pdf of the target's states $\mathcal{F}_{k-1}^j$ from the $(k-1)$-th time instant and the new measurement array obtained at the $k$-th time instant. The objective of the consensus stage, discussed in Section III, is to guarantee pointwise convergence of each estimated pdf $\mathcal{F}_{k,\nu}^j$ to the consensual pdf $\mathcal{F}_k^*$.

B.
Bayesian Filter with Measurement Exchange

A Bayesian filter consists of two steps: (i) the prior pdf of the target's states is obtained during the prediction stage, and (ii) the posterior pdf of the target's states is updated using the new measurement array during the update stage [23]–[26]. The Bayesian filter gives the exact posterior probability distribution; hence it is the best possible estimate of the target from the available information. Exchange of measurements is optional, since heterogeneous agents, with different priors, fields of view, resolutions, tolerances, etc., may not be able to combine measurements from other agents. For example, if a satellite in space and a low-flying quadrotor are observing the same target, then they cannot exchange measurements due to their different fields of view. Furthermore, a centralized estimator may not be able to combine measurements from all heterogeneous agents in the network to estimate $p(x_k \mid z_k^{\{1,\ldots,m\}})$, because it would have to use a common prior for all the agents. Hence, in this paper, we let the individual agents generate their own posterior pdfs of the target's states and then combine them to get the best estimated pdf from the network. If an agent can combine measurements from another neighboring agent during its update stage, then we call them measurement neighbors.

In this section, we extend the Bayesian filter by assuming that each agent transmits its measurements to other agents in the network and receives the measurements from its measurement neighbors. Let $z_k^{\mathcal{S}_k^j} := \{z_k^l,\ \forall l \in \mathcal{S}_k^j\}$ denote the array of measurements taken at the $k$-th time instant by the measurement neighbors of the $j$-th agent, where $\mathcal{S}_k^j \subseteq \mathcal{J}_k^j$ denotes the set of measurement neighbors among the inclusive neighbors of the $j$-th agent. Next, we assume that the prior is available at the initial time.

Assumption 5. For each agent, the initial prior of the states $\mathcal{F}_0^j = p^j(x_0)$ is assumed to be available. In case no knowledge about $x_0$ is available, $\mathcal{F}_0^j$ is assumed to be uniformly distributed over $\mathcal{X}$.
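The two filtering steps described above, prediction followed by a multi-sensor Bayes update over the measurement neighbors' likelihoods, can be sketched on a discretized grid. Everything below (the random-walk transition kernel, the two Gaussian sensor likelihoods, and their parameters) is a hypothetical example for illustration, not a model from the paper.

```python
import numpy as np

# 1-D grid over the compact state space X; dx is the Lebesgue cell width.
x = np.linspace(0.0, 10.0, 201)
dx = x[1] - x[0]

# Hypothetical transition kernel p(x_k | x_{k-1}): Gaussian random walk.
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.3) ** 2)
K /= K.sum(axis=0, keepdims=True) * dx          # each column is a pdf in x_k

def predict(prior):
    """Prediction stage: Chapman-Kolmogorov integral approximated on the grid."""
    return (K @ prior) * dx

def update(prior, likelihoods):
    """Update stage: multiply the prior by each measurement neighbor's
    likelihood p^l(z^l | x) and renormalize (Bayes' rule)."""
    post = prior.copy()
    for lik in likelihoods:
        post = post * lik
    return post / (post.sum() * dx)

# Uniform initial prior (Assumption 5: no knowledge of x_0).
prior = np.ones_like(x) / (x[-1] - x[0])

# Two heterogeneous measurement neighbors observe the target near x = 4.
lik_a = np.exp(-0.5 * ((x - 4.1) / 1.0) ** 2)   # coarse sensor
lik_b = np.exp(-0.5 * ((x - 3.9) / 0.4) ** 2)   # sharper sensor

posterior = update(predict(prior), [lik_a, lik_b])
print(x[np.argmax(posterior)])  # posterior mode lies between 3.9 and 4.1
```

The sharper sensor dominates the fused posterior, as expected from precision-weighted fusion of Gaussian likelihoods.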
In Bayesian filtering with measurement exchange, the $j$-th agent estimates the posterior pdf of the target's states $\mathcal{F}_k^j = p^j(x_k \mid z_k^{\mathcal{S}_k^j})$ at the $k$-th time instant using the estimated consensual pdf of the target's states $\mathcal{F}_{k-1} = p_{k-1}(x_{k-1})$ from the $(k-1)$-th time instant and the new measurement array $z_k^{\mathcal{S}_k^j}$ obtained at the $k$-th time instant. The prediction stage uses the target dynamics model (1) to obtain the estimated prior pdf of the target's states at the $k$-th time instant via the Chapman–Kolmogorov equation:

$p^j(x_k) = \int_{\mathcal{X}} p(x_k \mid x_{k-1})\, p_{k-1}(x_{k-1})\, d\mu(x_{k-1})$.  (3)

The probabilistic model of the state evolution $p(x_k \mid x_{k-1})$ is defined by the target dynamics model (1) and the known statistics of the i.i.d. process noise $v_{k-1}$.

Proposition 1. The new measurement array $z_k^{\mathcal{S}_k^j}$ is used to compute the posterior pdf of the target's states $\mathcal{F}_k^j = p^j(x_k \mid z_k^{\mathcal{S}_k^j})$ during the update stage using Bayes' rule (4):

$p^j(x_k \mid z_k^{\mathcal{S}_k^j}) = \dfrac{\prod_{l \in \mathcal{S}_k^j} p^l(z_k^l \mid x_k)\; p^j(x_k)}{\int_{\mathcal{X}} \prod_{l \in \mathcal{S}_k^j} p^l(z_k^l \mid x_k)\; p^j(x_k)\, d\mu(x_k)}$.  (4)

The likelihood function $p^l(z_k^l \mid x_k)$, $\forall l \in \mathcal{S}_k^j$, is defined by the measurement model (2) and the corresponding known statistics of the i.i.d. measurement noise $w_k^l$.

Proof: We need to show that the term $p(z_k^{\mathcal{S}_k^j} \mid x_k)$ in the Bayesian filter [26] simplifies to $\prod_{l \in \mathcal{S}_k^j} p^l(z_k^l \mid x_k)$. Let the agents $r, r+1, \ldots$ be measurement neighbors of the $j$-th agent at the $k$-th time instant (i.e., $r, r+1, \ldots \in \mathcal{N}_k^j$). Let us define $z_k^{\mathcal{S}_k^j \setminus \{r\}} := \{z_k^l,\ \forall l \in \mathcal{S}_k^j \setminus \{r\}\}$ as the measurement array obtained by the $j$-th agent at the $k$-th time instant which does not contain the measurement from the $r$-th agent. Since the measurement noise is i.i.d. and (2) describes a Markov process of order one, we get:

$\dfrac{p(z_k^{\mathcal{S}_k^j}, x_k)}{p(x_k)} = \dfrac{p(z_k^{\mathcal{S}_k^j}, x_k)}{p(z_k^{\mathcal{S}_k^j \setminus \{r\}}, x_k)} \cdot \dfrac{p(z_k^{\mathcal{S}_k^j \setminus \{r\}}, x_k)}{p(z_k^{\mathcal{S}_k^j \setminus \{r, r+1\}}, x_k)} \cdots \dfrac{p(z_k^j, x_k)}{p(x_k)} = p(z_k^r \mid z_k^{\mathcal{S}_k^j \setminus \{r\}}, x_k)\; p(z_k^{r+1} \mid z_k^{\mathcal{S}_k^j \setminus \{r, r+1\}}, x_k) \cdots p(z_k^j \mid x_k)$.

Thus, we obtain $p(z_k^{\mathcal{S}_k^j} \mid x_k) = p(z_k^{\mathcal{S}_k^j}, x_k)/p(x_k) = \prod_{l \in \mathcal{S}_k^j} p^l(z_k^l \mid x_k)$.

If the estimates are represented by pmfs, then we can compare these estimates using entropy [52, pp. 13], which is a measure of the uncertainty associated with a random variable or its information content.

Remark 2. Let $Y_k^j$ be a random variable with the pmf $p^j(x_k \mid z_k^j)$ given by a stand-alone Bayesian filter [26], let $Y_k^{\mathcal{S}^j}$ be a random variable with the pmf $p^j(x_k \mid z_k^{\mathcal{S}_k^j})$ given by the Bayesian filter with measurement exchange, and let $H$ refer to the entropy of a random variable. Since $Y_k^{\mathcal{S}^j}$ is obtained by conditioning $Y_k^j$ with $z_k^{\mathcal{S}_k^j \setminus \{j\}}$, because $p^j(x_k \mid z_k^{\mathcal{S}_k^j}) = p^j(x_k \mid z_k^j, z_k^{\mathcal{S}_k^j \setminus \{j\}})$, the claim follows from the theorem that conditioning reduces entropy (cf. [52, pp. 27]): $I(Y_k^j; z_k^{\mathcal{S}_k^j \setminus \{j\}}) = H(Y_k^j) - H(Y_k^j \mid z_k^{\mathcal{S}_k^j \setminus \{j\}}) \geq 0$, where $I(\cdot\,;\cdot)$ refers to the nonnegative mutual information between two random variables. Since $H(Y_k^{\mathcal{S}^j}) = H(Y_k^j \mid z_k^{\mathcal{S}_k^j \setminus \{j\}}) \leq H(Y_k^j)$, the estimate given by the Bayesian filter with measurement exchange is more accurate than that obtained by a stand-alone Bayesian filter.

Note that (4) is similar to the empirical equation for the Independent Likelihood Pool given in [35] and is a generalization of the Distributed Sequential Bayesian Estimation Algorithm given in [53]. The structure of (4) ensures that an arbitrary part of the prior distribution does not dominate the measurements. There is no consensus protocol across the network, because each agent receives information only from its neighboring agents and never receives measurements, even indirectly, from any other agent in the network.

III. COMBINING PROBABILITY DISTRIBUTIONS

In this section, we present the algorithms for achieving consensus in probability distributions across the network. As discussed before, the objective of the consensus stage in Algorithm 1 is to guarantee pointwise convergence of each $\mathcal{F}_k^j$ to a consensual pdf $\mathcal{F}_k^*$, which is independent of $j$.
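Before turning to the consensus algorithms, Remark 2's conclusion, that fusing measurement neighbors' likelihoods yields a lower-entropy (more informative) posterior than a stand-alone update, can be checked numerically. The grid and the two Gaussian sensor likelihoods below are illustrative assumptions.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 401)

def bayes_update(prior, likelihoods):
    """Posterior proportional to prior times the product of likelihoods,
    as in the update stage (normalized as a pmf on the grid)."""
    post = prior * np.prod(likelihoods, axis=0)
    return post / post.sum()

def entropy(p):
    """Discrete (Shannon) entropy of a pmf on the grid, in nats."""
    q = p[p > 0]
    return float(-(q * np.log(q)).sum())

prior = np.ones_like(x) / x.size                  # uniform prior
lik_own = np.exp(-0.5 * ((x - 5.2) / 0.8) ** 2)   # agent's own sensor
lik_nbr = np.exp(-0.5 * ((x - 4.8) / 0.6) ** 2)   # a measurement neighbor

h_alone = entropy(bayes_update(prior, [lik_own]))
h_fused = entropy(bayes_update(prior, [lik_own, lik_nbr]))
print(h_alone > h_fused)  # True: measurement exchange reduces uncertainty
```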
This is achieved by each agent recursively transmitting its estimated pdf of the target's states to other agents, receiving the estimates of its neighboring agents, and updating its own estimate of the target. Let $\mathcal{F}_{k,0}^j = \mathcal{F}_k^j$ represent the local estimated posterior pdf of the target's states, by the $j$-th agent at the start of the consensus stage, obtained using Bayesian filters with/without measurement exchange. During each of the $n_{\mathrm{loop}}$ iterations within the consensus stage in Algorithm 1, this estimate is updated as follows:

$\mathcal{F}_{k,\nu}^j = \mathcal{T}\big(\{\mathcal{F}_{k,\nu-1}^l\}_{l \in \mathcal{J}_k^j}\big), \quad \forall j \in \{1,\ldots,m\},\ \forall \nu \in \mathbb{N}$,  (5)

where $\mathcal{T}$ is the linear or logarithmic opinion pool for combining the pdf estimates. Note that the problem of measurement neighbors does not arise here, since all pdfs are expressed over the complete state space. We introduce Lemma 2 to show that pointwise convergence of pdfs is a sufficient condition for convergence of their induced measures in total variation (TV). Let $F_1, \ldots, F_n, F$ and $\lim_{n\to\infty} F_n$ be real-valued measurable functions on $\mathcal{X}$, let $\mathcal{B}(\mathcal{X})$ be the Borel $\sigma$-algebra of $\mathcal{X}$, and let $A$ be any set in $\mathcal{B}(\mathcal{X})$. If $\mu_{F_n}(A) = \int_A F_n(x)\, d\mu(x)$ for any set $A \in \mathcal{B}(\mathcal{X})$, then $\mu_{F_n}$ is defined as the measure induced by the function $F_n$ on $\mathcal{X}$. Let $\mu_{F_n}$ and $\mu_F$ denote the respective induced measures of $F_n$ and $F$ on $\mathcal{X}$.

Definition 3 (Convergence in TV). If $\|\mu_{F_n} - \mu_F\|_{TV} := \sup_{A \in \mathcal{B}(\mathcal{X})} |\mu_{F_n}(A) - \mu_F(A)|$ tends to zero as $n \to \infty$, then the measure $\mu_{F_n}$ converges to the measure $\mu_F$ in TV, i.e., $\lim_{n\to\infty} \mu_{F_n} \xrightarrow{TV} \mu_F$.

Lemma 2 (Pointwise convergence implies convergence in TV). If $F_n$ converges to $F$ pointwise, i.e., $\lim_{n\to\infty} F_n = F$ pointwise, then the measure $\mu_{F_n}$ converges in TV to the measure $\mu_F$, i.e., $\lim_{n\to\infty} \mu_{F_n} \xrightarrow{TV} \mu_F$.

Proof: Similar to the proof of Scheffé's theorem [54, pp. 84], under Assumption 3, using the dominated convergence theorem (cf. [54, Theorem 1.5.6, pp. 23]) for any set $A \in \mathcal{B}(\mathcal{X})$ gives:

$\lim_{n\to\infty} \int_A F_n(x)\, d\mu(x) = \int_A \lim_{n\to\infty} F_n(x)\, d\mu(x) = \int_A F(x)\, d\mu(x)$.

This relation between the measures implies that $\lim_{n\to\infty} \|\mu_{F_n} - \mu_F\|_{TV} = 0$ and $\lim_{n\to\infty} \mu_{F_n} \xrightarrow{TV} \mu_F$.

A.
Consensus using the Linear Opinion Pool

The first method of combining the estimates is motivated by the linear consensus algorithms widely studied in the literature [5]–[7]. The pdfs are combined using the Linear Opinion Pool (LinOP) of probability measures [39], [40]:

$\mathcal{F}_{k,\nu}^j = \sum_{l \in \mathcal{J}_k^j} a_{k,\nu-1}^{lj}\, \mathcal{F}_{k,\nu-1}^l, \quad \forall j \in \{1,\ldots,m\},\ \forall \nu \in \mathbb{N}$,  (6)

where $\sum_{l \in \mathcal{J}_k^j} a_{k,\nu-1}^{lj} = 1$, and the updated pdf $\mathcal{F}_{k,\nu}^j$ after the $\nu$-th consensus loop is a weighted average of the pdfs of the inclusive neighbors $\mathcal{F}_{k,\nu-1}^l$, $\forall l \in \mathcal{J}_k^j$, from the $(\nu-1)$-th consensus loop, at the $k$-th time instant. Let $\mathcal{W}_{k,\nu} := [\mathcal{F}_{k,\nu}^1, \ldots, \mathcal{F}_{k,\nu}^m]^T$ denote an array of pdf estimates of all the $m$ agents after the $\nu$-th consensus loop; then the LinOP (6) can be expressed concisely as:

$\mathcal{W}_{k,\nu} = P_{k,\nu-1}\, \mathcal{W}_{k,\nu-1}, \quad \forall \nu \in \mathbb{N}$,  (7)

where $P_{k,\nu-1}$ is a matrix with entries $P_{k,\nu-1}[j, l] = a_{k,\nu-1}^{lj}$.

Assumption 6. The communication network topology of the multi-agent system $\mathcal{G}_k$ is strongly connected (SC). The weights $a_{k,\nu-1}^{lj}$, $\forall j, l \in \{1,\ldots,m\}$, and the matrix $P_{k,\nu-1}$ have the following properties: (i) the weights are the same for all consensus loops within each time instant, i.e., $a_{k,\nu-1}^{lj} = a_k^{lj}$

and $P_{k,\nu-1} = P_k$, $\forall \nu \in \mathbb{N}$; (ii) the matrix $P_k$ conforms with the graph $\mathcal{G}_k$, i.e., $a_k^{lj} > 0$ if and only if $l \in \mathcal{J}_k^j$, else $a_k^{lj} = 0$; and (iii) the matrix $P_k$ is row stochastic, i.e., $\sum_{l=1}^{m} a_k^{lj} = 1$.

Theorem 3 (Consensus using the LinOP on SC Digraphs). Under Assumption 6, using the LinOP (6), each $\mathcal{F}_{k,\nu}^j$ asymptotically converges pointwise to the pdf $\mathcal{F}_k^* = \sum_{i=1}^{m} \pi_i\, \mathcal{F}_{k,0}^i$, where $\pi = [\pi_1, \ldots, \pi_m]^T$ is the unique stationary distribution of $P_k$. Furthermore, their induced measures converge in total variation, i.e., $\lim_{\nu\to\infty} \mu_{\mathcal{F}_{k,\nu}^j} \xrightarrow{TV} \mu_{\mathcal{F}_k^*}$, $\forall j \in \{1,\ldots,m\}$.

Proof: See Appendix A.

Theorem 3 is a generalization of the linear consensus algorithm for combining joint measurement probabilities [55]. Moreover, if $\pi = \frac{1}{m}\mathbf{1}$ and each $\mathcal{F}_{k,0}^i$ is an $L^2$ function, then $\mathcal{F}_k^* = \frac{1}{m}\sum_{i=1}^{m} \mathcal{F}_{k,0}^i$ globally minimizes the sum of the squares of the $L^2$ distances with the locally estimated posterior pdfs. As shown in Fig. 3(a)–(b), the main difficulty with the LinOP is that the resulting solution is typically multimodal, so no clear choice for a jointly preferred estimate emerges from it [40]. Moreover, the LinOP algorithm critically depends on the assumption that the same 0–1 scale is used by every agent, as shown in Fig. 3(c)–(d). Hence, better schemes for combining probability distributions are needed for the proposed BCF algorithm.

B. Consensus using the Logarithmic Opinion Pool

Note that $\mathcal{F}_{k,\nu}^j = p_{k,\nu}^j(x_k)$, $\forall x_k \in \mathcal{X}$, represents the pdf of the estimated target's states by the $j$-th agent during the $\nu$-th consensus loop at the $k$-th time instant. The LogOP is given as [41]:

$\mathcal{F}_{k,\nu}^j = p_{k,\nu}^j(x_k) = \dfrac{\prod_{l \in \mathcal{J}_k^j} p_{k,\nu-1}^l(x_k)^{a_{k,\nu-1}^{lj}}}{\int_{\mathcal{X}} \prod_{l \in \mathcal{J}_k^j} p_{k,\nu-1}^l(x_k)^{a_{k,\nu-1}^{lj}}\, d\mu(x_k)}, \quad \forall j \in \{1,\ldots,m\},\ \forall \nu \in \mathbb{N}$,  (8)

where $\sum_{l \in \mathcal{J}_k^j} a_{k,\nu-1}^{lj} = 1$ and the integral in the denominator of (8) is finite. Thus the updated pdf $\mathcal{F}_{k,\nu}^j$ after the $\nu$-th consensus loop is the weighted geometric average of the pdfs of the inclusive neighbors $\mathcal{F}_{k,\nu-1}^l$, $\forall l \in \mathcal{J}_k^j$, from the $(\nu-1)$-th consensus loop, at the $k$-th time instant. As shown in Fig. 3(a)–(b), the LogOP solution is typically unimodal and less dispersed, indicating a consensual estimate jointly preferred by the network [40]. As shown in Fig.
3(c)–(d), the LogOP solution is invariant under rescaling of individual degrees of belief; hence it preserves an important credo of Bayesian decision theory, i.e., the optimal decision should not depend upon the choice of scale for the utility function or prior probability distribution [56]. When the parameter space is finite and a 0–1 probability scale is adopted, the LogOP is equivalent to the Nash product [57]. Note that if the local probability distribution of the target's states is inherently multimodal, as shown in Fig. 3(e)–(f), then the LogOP preserves this multimodal nature while combining these local estimates.

Fig. 3. In (a), two unimodal pdfs $f_1(x)$ and $f_2(x)$ are shown. In (b), these pdfs are combined using the LinOP and LogOP with the weight $\alpha_1 = 0.5$, i.e., $f_{\mathrm{LinOP}}(x) = \alpha_1 f_1(x) + (1-\alpha_1) f_2(x)$ and $f_{\mathrm{LogOP}}(x) = f_1^{\alpha_1} f_2^{1-\alpha_1} / \int_{\mathcal{X}} f_1^{\alpha_1} f_2^{1-\alpha_1}\, d\mu(x)$. Note that the LinOP solution is multimodal while the LogOP solution is unimodal, indicating a consensual pdf. In (c), the scale of the function $f_2(x)$ is changed from the standard 0–1 scale. In (d), the normalized LinOP solution changes drastically, but the LogOP solution remains unaffected. In (e), the pdfs $f_3(x)$ and $f_4(x)$ have a bimodal nature. In (f), the LogOP solution preserves this bimodal nature.

The most compelling reason for using the LogOP is that it is externally Bayesian, i.e., finding the consensus distribution commutes with the process of revising distributions using a commonly agreed likelihood distribution. Thus [40]:

$\mathcal{T}\left(\left\{\dfrac{l(x)\, p^l(x)}{\int_{\mathcal{X}} l(x)\, p^l(x)\, d\mu(x)}\right\}_{l \in \mathcal{J}}\right) = \dfrac{l(x)\, \mathcal{T}(\{p^l(x)\})}{\int_{\mathcal{X}} l(x)\, \mathcal{T}(\{p^l(x)\})\, d\mu(x)}$,  (9)

where $\mathcal{T}$ refers to the LogOP (8), $p^l(x)$, $\forall l \in \mathcal{J}$, are pdfs on $\mathcal{X}$, and $l(x)$ is an arbitrary likelihood pdf on $\mathcal{X}$. Due to these advantages, the LogOP is used for combining prior distributions [58] and conditional random fields for natural language processing tasks [59]. Next, we present consensus theorems using the LogOP.

Assumption 7.
The local estimated pdf at the start of the consensus stage is positive everywhere, i.e., $\mathcal{F}_{k,0}^j = p_{k,0}^j(x_k) > 0$, $\forall x_k \in \mathcal{X}$, $\forall j \in \{1,\ldots,m\}$.

Assumption 7 is introduced to avoid regions with zero probability, since they would constitute vetoes and unduly great emphasis would be placed on them. Moreover, the LogOP

guarantees that $\mathcal{F}_{k,\nu}^j$ will remain positive for all subsequent consensus loops.

Definition 4 ($H_{k,\nu}^j$ vector for LogOP). For the purpose of analysis, let us choose $\psi \in \mathcal{X}$ such that $p_{k,\nu}^j(\psi) > 0$, $\forall j \in \{1,\ldots,m\}$, $\forall \nu \in \mathbb{N}$. Let us define $H_{k,\nu}^j := \ln\!\big(p_{k,\nu}^j(x_k)/p_{k,\nu}^j(\psi)\big)$.

Under Assumption 7, $H_{k,\nu}^j$ is a well-defined function, but need not be an $L^1$ function. Then, by simple algebraic manipulation of (8), we get [60]:

$\dfrac{p_{k,\nu}^j(x_k)}{p_{k,\nu}^j(\psi)} = \dfrac{\prod_{l \in \mathcal{J}_k^j} p_{k,\nu-1}^l(x_k)^{a_{k,\nu-1}^{lj}}}{\prod_{l \in \mathcal{J}_k^j} p_{k,\nu-1}^l(\psi)^{a_{k,\nu-1}^{lj}}}$, hence $H_{k,\nu}^j = \sum_{l \in \mathcal{J}_k^j} a_{k,\nu-1}^{lj}\, H_{k,\nu-1}^l, \quad \forall j \in \{1,\ldots,m\},\ \forall \nu \in \mathbb{N}$.  (10)

Note that (10) is similar to the LinOP (6). Let $\mathcal{U}_{k,\nu} := [H_{k,\nu}^1, \ldots, H_{k,\nu}^m]^T$ be an array of the estimates of all the $m$ agents during the $\nu$-th consensus loop at the $k$-th time instant; then equation (10) can be expressed concisely as:

$\mathcal{U}_{k,\nu} = P_{k,\nu-1}\, \mathcal{U}_{k,\nu-1}, \quad \forall \nu \in \mathbb{N}$,  (11)

where $P_{k,\nu-1}$ is a matrix with entries $a_{k,\nu-1}^{lj}$. Thus we are able to use the highly nonlinear LogOP for combining the pdf estimates, but we have reduced the complexity of the problem to that of consensus using the LinOP.

Theorem 4 (Consensus using the LogOP on SC Digraphs). Under Assumptions 6 and 7, using the LogOP (8), each $\mathcal{F}_{k,\nu}^j$ asymptotically converges pointwise to the pdf $\mathcal{F}_k^*$ given by:

$\mathcal{F}_k^* = p_k^*(x_k) = \dfrac{\prod_{i=1}^{m} p_{k,0}^i(x_k)^{\pi_i}}{\int_{\mathcal{X}} \prod_{i=1}^{m} p_{k,0}^i(x_k)^{\pi_i}\, d\mu(x_k)}$,  (12)

where $\pi$ is the unique stationary distribution of $P_k$. Furthermore, their induced measures converge in total variation, i.e., $\lim_{\nu\to\infty} \mu_{\mathcal{F}_{k,\nu}^j} \xrightarrow{TV} \mu_{\mathcal{F}_k^*}$, $\forall j \in \{1,\ldots,m\}$.

Proof: Similar to the proof of Theorem 3, each $H_{k,\nu}^j$ converges pointwise to $H_k^* = \pi^T\, \mathcal{U}_{k,0} = \sum_{i=1}^{m} \pi_i H_{k,0}^i$ asymptotically. We additionally need to show that convergence of $H_{k,\nu}^j$ to $H_k^*$ implies pointwise convergence of $\mathcal{F}_{k,\nu}^j$ to $\mathcal{F}_k^*$. We have, $\forall x_k \in \mathcal{X}$:

$\lim_{\nu\to\infty} \big(\ln p_{k,\nu}^j(x_k) - \ln p_{k,\nu}^j(\psi)\big) = \ln p_k^*(x_k) - \ln p_k^*(\psi)$.  (13)

We claim that there exists $\psi \in \mathcal{X}$ such that $\lim_{\nu\to\infty} p_{k,\nu}^j(\psi) = p_k^*(\psi)$. If this claim were untrue, then $0 < \lim_{\nu\to\infty} p_{k,\nu}^j(x) < p_k^*(x)$, $\forall x \in \mathcal{X}$, or vice versa. Hence $\lim_{\nu\to\infty} \int_{\mathcal{X}} p_{k,\nu}^j(x)\, d\mu(x) = 1 < \int_{\mathcal{X}} p_k^*(x)\, d\mu(x)$, which results in a contradiction, since $p_k^*(x)$ is also a pdf. Hence, substituting $\psi$ into equation (13) gives $\lim_{\nu\to\infty} p_{k,\nu}^j(x_k) = p_k^*(x_k)$, $\forall x_k \in \mathcal{X}$. Thus each $\mathcal{F}_{k,\nu}^j$ converges pointwise to the consensual pdf $\mathcal{F}_k^*$ given by (12).
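The limit (12) can be verified numerically by iterating the LogOP update on a discretized state space. The three-agent digraph, its row-stochastic weight matrix, and the local Gaussian posteriors below are illustrative assumptions; the check is that every agent's iterate approaches the $\pi$-weighted geometric mean of the initial pdfs.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 301)
dx = x[1] - x[0]

def normalize(p):
    return p / (p.sum() * dx)

# Three agents with different positive local posteriors (Assumption 7).
pdfs = np.array([normalize(np.exp(-0.5 * ((x - m) / s) ** 2) + 1e-6)
                 for m, s in [(4.0, 0.8), (5.0, 1.2), (6.0, 0.6)]])

# Row-stochastic weight matrix conforming with a strongly connected digraph.
P = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5]])

# LogOP consensus loop (8): weighted geometric mean of in-neighbors' pdfs.
F = pdfs.copy()
for _ in range(100):
    F = np.exp(P @ np.log(F))
    F = np.array([normalize(f) for f in F])

# Stationary distribution pi of P (left eigenvector for eigenvalue 1).
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

# Theorem 4: the consensual pdf is the pi-weighted geometric mean, eq. (12).
F_star = normalize(np.exp(pi @ np.log(pdfs)))
print(np.max(np.abs(F - F_star)) < 1e-6)  # all agents agree with (12)
```

The per-step renormalization only adds an agent-wise constant in log space, so it does not alter the linear consensus dynamics (11) that drive the convergence.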
By Lemma 2, the measure induced by $\mathcal{F}_{k,\nu}^j$ on $\mathcal{X}$ converges in total variation to the measure induced by $\mathcal{F}_k^*$ on $\mathcal{X}$, i.e., $\lim_{\nu\to\infty} \|\mu_{\mathcal{F}_k^*} - \mu_{\mathcal{F}_{k,\nu}^j}\|_{TV} = 0$. Since the Perron–Frobenius theorem only yields asymptotic convergence, we next discuss the algorithm for achieving global exponential convergence using balanced graphs.

Assumption 8. In addition to Assumption 6, the weights $a_{jl}$ are such that the digraph $\mathcal{G}_k$ is balanced; hence for every vertex the in-degree equals the out-degree, i.e., $\sum_{l \in J_k^j} a_{jl} = \sum_{r:\, j \in J_k^r} a_{rj}$, where $j, l, r \in \{1,\ldots,m\}$.

Theorem 5 (Consensus using the LogOP on SC balanced digraphs). Under Assumptions 7 and 8, using the LogOP (8), each $\mathcal{F}_{k,\nu}^j$ globally exponentially converges pointwise to the pdf $\mathcal{F}_k^*$ given by:
$$\mathcal{F}_k^* = p_k^*(x) = \frac{\prod_{i=1}^m p_{k,0}^i(x)^{1/m}}{\int_{\mathcal{X}} \prod_{i=1}^m p_{k,0}^i(x)^{1/m}\, d\mu(x)} \tag{14}$$
at a rate faster than or equal to $\sqrt{\lambda_{m-1}(P_k^T P_k)} = \sigma_{m-1}(P_k)$. Furthermore, the induced measures globally exponentially converge in total variation, i.e., $\lim_{\nu\to\infty} \|\mu_{\mathcal{F}_k^*} - \mu_{\mathcal{F}_{k,\nu}^j}\|_{TV} = 0$, $\forall j \in \{1,\ldots,m\}$.

Proof: Since Assumption 8 is stronger than Assumption 6, we get $\lim_{\nu\to\infty} P_k^\nu = \mathbf{1}\pi^T$. Moreover, since $P_k$ is also a column stochastic matrix, $\pi = \frac{1}{m}\mathbf{1}$ is its left eigenvector corresponding to the eigenvalue $1$, i.e., $P_k^T \frac{1}{m}\mathbf{1} = \frac{1}{m}\mathbf{1}$, and it satisfies the normalizing condition. Hence $\lim_{\nu\to\infty} P_k^\nu = \frac{1}{m}\mathbf{1}\mathbf{1}^T$, and each $H_{k,\nu}^j$ converges pointwise to $H_k^* = \frac{1}{m}\mathbf{1}^T\,\mathcal{U}_{k,0} = \frac{1}{m}\sum_{i=1}^m H_{k,0}^i$. From the proof of Theorem 4, each $\mathcal{F}_{k,\nu}^j$ converges pointwise to the consensual pdf $\mathcal{F}_k^*$ given by (14). Note that the $\mathcal{F}_{k,\nu}^j$ are $L^1$ functions but the $H_{k,\nu}^j$ need not be $L^1$ functions.

Let $V_{tr} = \left[\frac{1}{\sqrt{m}}\mathbf{1},\, V_s\right]$ be the orthonormal matrix of eigenvectors of the symmetric primitive matrix $P_k^T P_k$. By spectral decomposition [61], we get:
$$V_{tr}^T P_k^T P_k V_{tr} = \begin{bmatrix} 1 & \mathbf{0}^T \\ \mathbf{0} & V_s^T P_k^T P_k V_s \end{bmatrix},$$
where $\frac{1}{m}\mathbf{1}^T P_k^T P_k \mathbf{1} = 1$, $\frac{1}{\sqrt{m}}\mathbf{1}^T P_k^T P_k V_s = \mathbf{0}^T$, and $V_s^T P_k^T P_k \frac{1}{\sqrt{m}}\mathbf{1} = \mathbf{0}$ are used. Since the eigenvectors are orthonormal, $\frac{1}{m}\mathbf{1}\mathbf{1}^T + V_s V_s^T = I$. The rate at which $\mathcal{U}_{k,\nu}$ synchronizes to $H_k^*\mathbf{1}$ is equal to the rate at which $V_s^T\,\mathcal{U}_{k,\nu}$ converges to $\mathbf{0}$. Pre-multiplying (11) by $V_s^T$ and substituting $V_s^T \mathbf{1} = \mathbf{0}$ results in:
$$V_s^T\,\mathcal{U}_{k,\nu} = V_s^T P_k \left(\tfrac{1}{m}\mathbf{1}\mathbf{1}^T + V_s V_s^T\right)\mathcal{U}_{k,\nu-1} = V_s^T P_k V_s\, V_s^T\,\mathcal{U}_{k,\nu-1}.$$
Let $z_{k,\nu} = V_s^T\,\mathcal{U}_{k,\nu}$.
The corresponding virtual dynamics is represented by $z_{k,\nu} = V_s^T P_k V_s\, z_{k,\nu-1}$, which has both $V_s^T\,\mathcal{U}_{k,\nu}$ and $\mathbf{0}$ as particular solutions. Let $\Phi_{k,\nu} = z_{k,\nu}^T z_{k,\nu}$ be a candidate Lyapunov function for this dynamics. Expanding this gives:

$$\Phi_{k,\nu} = z_{k,\nu-1}^T V_s^T P_k^T P_k V_s\, z_{k,\nu-1} \le \lambda_{\max}\!\left(V_s^T P_k^T P_k V_s\right)\Phi_{k,\nu-1}.$$
Note that $V_s^T P_k^T P_k V_s$ contains all the eigenvalues of $P_k^T P_k$ other than $1$. Hence $\lambda_{\max}(V_s^T P_k^T P_k V_s) = \lambda_{m-1}(P_k^T P_k) < 1$, and $\Phi_{k,\nu}$ globally exponentially vanishes with a rate faster than or equal to $\lambda_{m-1}(P_k^T P_k)$. Hence each $H_{k,\nu}^j$ globally exponentially converges pointwise to $H_k^*$ with a rate faster than or equal to $\sqrt{\lambda_{m-1}(P_k^T P_k)} = \sigma_{m-1}(P_k)$.

Next, we need to find the rate of convergence of $\mathcal{F}_{k,\nu}^j$ to $\mathcal{F}_k^*$. From the exponential convergence of $H_{k,\nu}^j$, we get:
$$\left|\ln\!\left(\frac{p_{k,\nu}^j(x)\, p_k^*(\psi)}{p_k^*(x)\, p_{k,\nu}^j(\psi)}\right)\right| \le \sigma_{m-1}(P_k)\left|\ln\!\left(\frac{p_{k,\nu-1}^j(x)\, p_k^*(\psi)}{p_k^*(x)\, p_{k,\nu-1}^j(\psi)}\right)\right|. \tag{15}$$
Let us define the function $\alpha_{k,\nu}^j(x)$ such that $\alpha_{k,\nu}^j(x) = \frac{p_{k,\nu}^j(x)\, p_k^*(\psi)}{p_k^*(x)\, p_{k,\nu}^j(\psi)}$ if $\frac{p_{k,\nu}^j(x)\, p_k^*(\psi)}{p_k^*(x)\, p_{k,\nu}^j(\psi)} \ge 1$, and $\alpha_{k,\nu}^j(x) = \frac{p_k^*(x)\, p_{k,\nu}^j(\psi)}{p_{k,\nu}^j(x)\, p_k^*(\psi)}$ otherwise. Note that $\alpha_{k,\nu}^j(x)$ is a continuous function since it is a product of continuous functions. Since $\alpha_{k,\nu}^j(x) \ge 1$ and $\ln \alpha_{k,\nu}^j(x) \ge 0$, $\forall x \in \mathcal{X}$, (15) simplifies to $\ln \alpha_{k,\nu}^j(x) \le \sigma_{m-1}(P_k)\,\ln \alpha_{k,\nu-1}^j(x)$; hence
$$\alpha_{k,\nu}^j(x) \le \alpha_{k,0}^j(x)^{\sigma_{m-1}(P_k)^\nu}. \tag{16}$$
Since $p_{k,\nu}^j(x)$ tends to $p_k^*(x)$, i.e., $\lim_{\nu\to\infty}\alpha_{k,\nu}^j(x) = 1$, we can write (16) as:
$$\alpha_{k,\nu}^j(x) - 1 \le \alpha_{k,0}^j(x)^{\sigma_{m-1}(P_k)^\nu} - 1^{\sigma_{m-1}(P_k)^\nu}. \tag{17}$$
Using the mean value theorem (cf. [50]), the right-hand side of (17) can be simplified to (18), for some $c \in [1, \alpha_{k,0}^j(x)]$:
$$\alpha_{k,0}^j(x)^{\sigma_{m-1}(P_k)^\nu} - 1^{\sigma_{m-1}(P_k)^\nu} = \sigma_{m-1}(P_k)^\nu\, c^{\sigma_{m-1}(P_k)^\nu - 1}\left(\alpha_{k,0}^j(x) - 1\right). \tag{18}$$
As $\sigma_{m-1}(P_k) < 1$, the maximum value of $c^{\sigma_{m-1}(P_k)^\nu - 1}$ is $1$. Substituting this result into (17) gives:
$$\alpha_{k,\nu}^j(x) - 1 \le \sigma_{m-1}(P_k)^\nu\left(\alpha_{k,0}^j(x) - 1\right). \tag{19}$$
Hence $\alpha_{k,\nu}^j(x)$ exponentially converges to $1$ with a rate faster than or equal to $\sigma_{m-1}(P_k)$. Irrespective of the orientations of $\alpha_{k,\nu}^j(x)$ and $\alpha_{k,0}^j(x)$, (19) can be written as (20) by multiplying through by $\pm 1$ and then by $p_k^*(x)$:
$$\left|\frac{p_{k,\nu}^j(x)\, p_k^*(\psi)}{p_{k,\nu}^j(\psi)} - p_k^*(x)\right| \le \sigma_{m-1}(P_k)^\nu \left|\frac{p_{k,0}^j(x)\, p_k^*(\psi)}{p_{k,0}^j(\psi)} - p_k^*(x)\right|. \tag{20}$$
As shown in the proof of Theorem 4, we can choose $\psi$ such that $p_{k,0}^j(\psi) = p_k^*(\psi)$. A case analysis, according to whether the ratio $p_k^*(\psi)/p_{k,\nu}^j(\psi)$ is at least one or less than one, shows that in both cases the left-hand side of (20) upper-bounds $\left|p_{k,\nu}^j(x) - p_k^*(x)\right|$. Hence, for both cases, (20) simplifies to:
$$\left|p_{k,\nu}^j(x) - p_k^*(x)\right| \le \sigma_{m-1}(P_k)^\nu \left|p_{k,0}^j(x) - p_k^*(x)\right|.$$
Thus each $\mathcal{F}_{k,\nu}^j = p_{k,\nu}^j(x)$ globally exponentially converges to $\mathcal{F}_k^* = p_k^*(x)$ with a rate faster than or equal to $\sigma_{m-1}(P_k)$.
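The per-loop contraction established above can be checked numerically. In the sketch below, the doubly stochastic matrix (standing in for a balanced digraph's $P_k$) is an illustrative assumption; the assertion inside the loop verifies that the log-space disagreement shrinks by at least the second largest singular value at every consensus loop:

```python
import numpy as np

# A doubly stochastic matrix (circulant, so rows and columns both sum to 1),
# standing in for P_k on an SC balanced digraph.
m = 4
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0, 0.5]])
sigma = np.linalg.svd(P, compute_uv=False)[1]  # second largest singular value

rng = np.random.default_rng(0)
H = rng.normal(size=(m, 200))             # arbitrary initial log-densities H_{k,0}
Hbar = H.mean(axis=0, keepdims=True)      # consensus value (1/m) sum_i H_i
                                          # (preserved: columns of P sum to 1)
prev = np.linalg.norm(H - Hbar)
for _ in range(30):
    H = P @ H
    cur = np.linalg.norm(H - Hbar)
    assert cur <= sigma * prev + 1e-12    # disagreement contracts by sigma each loop
    prev = cur

print(round(sigma, 4))  # sigma_{m-1}(P) is about 0.7071 for this P
```

The contraction holds exactly because the disagreement lies in the subspace orthogonal to $\mathbf{1}$, on which the gain of $P$ is bounded by $\sigma_{m-1}(P)$, matching the Lyapunov argument in the proof.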
The KL divergence is a measure of the information lost when the consensual pdf is used to approximate the locally estimated posterior pdfs. We now show that the consensual pdf $\mathcal{F}_k^*$ obtained using Theorem 5, which is the weighted geometric average of the locally estimated posterior pdfs $\mathcal{F}_{k,0}^j$, $j \in \{1,\ldots,m\}$, minimizes the information lost during the consensus stage, because it minimizes the sum of KL divergences with those pdfs.

Theorem 6. The consensual pdf $\mathcal{F}_k^*$ given by (14) globally minimizes the sum of Kullback–Leibler (KL) divergences with the locally estimated posterior pdfs at the start of the consensus stage $\mathcal{F}_{k,0}^i$, $i \in \{1,\ldots,m\}$, i.e.,
$$\mathcal{F}_k^* = \arg\min_{\rho \in L^1_+(\mathcal{X})} \sum_{i=1}^m D_{KL}\!\left(\rho \,\|\, \mathcal{F}_{k,0}^i\right), \tag{21}$$
where $L^1_+(\mathcal{X})$ is the set of all pdfs over the state space $\mathcal{X}$ satisfying Assumption 7.

Proof: The sum of the KL divergences of a pdf $\rho \in L^1_+(\mathcal{X})$ with the locally estimated posterior pdfs is given by:
$$\sum_{i=1}^m D_{KL}\!\left(\rho \,\|\, \mathcal{F}_{k,0}^i\right) = \int_{\mathcal{X}} \left(m\,\rho(x)\ln\rho(x) - \rho(x)\sum_{i=1}^m \ln p_{k,0}^i(x)\right) d\mu(x). \tag{22}$$
Under Assumption 7, $D_{KL}(\rho\,\|\,\mathcal{F}_{k,0}^i)$ is well defined for all agents. Differentiating (22) with respect to $\rho$ using the Leibniz integral rule [54, Theorem A.5.1, pp. 372] and equating it to zero gives:
$$\int_{\mathcal{X}} \left(m\left(\ln\rho(x) + 1\right) - \sum_{i=1}^m \ln p_{k,0}^i(x)\right) d\mu(x) = 0, \tag{23}$$
for which $\rho^*(x) = \frac{1}{e}\prod_{i=1}^m p_{k,0}^i(x)^{1/m}$ is a solution. The projection of $\rho^*$ onto the set $L^1_+(\mathcal{X})$, obtained by

normalizing $\rho^*$ to one, is the consensual pdf $\mathcal{F}_k^* \in L^1_+(\mathcal{X})$ given by (14).

The KL divergence is a convex function of pdf pairs [52, Theorem 2.7.2, pp. 30]; hence the sum of KL divergences (22) is a convex function of $\rho$. If $\rho_1, \rho_2, \ldots, \rho_n \in L^1_+(\mathcal{X})$ and $\eta_1, \eta_2, \ldots, \eta_n \in [0,1]$ with $\sum_{i=1}^n \eta_i = 1$, then $\rho^\dagger = \sum_{i=1}^n \eta_i \rho_i \in L^1_+(\mathcal{X})$, because (i) $\rho_i(x) > 0$, $\forall x \in \mathcal{X}$, $\forall i \in \{1,\ldots,n\}$, implies $\rho^\dagger(x) > 0$, $\forall x \in \mathcal{X}$; and (ii) $\int_{\mathcal{X}} \rho_i(x)\, d\mu(x) = 1$, $\forall i \in \{1,\ldots,n\}$, implies $\int_{\mathcal{X}} \rho^\dagger(x)\, d\mu(x) = 1$. Moreover, since $\mathcal{X}$ is a compact set, $L^1_+(\mathcal{X})$ is a closed set. Hence $L^1_+(\mathcal{X})$ is a closed convex set, and (21) is a convex optimization problem.

The gradient of $\sum_{i=1}^m D_{KL}(\rho\,\|\,\mathcal{F}_{k,0}^i)$ evaluated at $\mathcal{F}_k^*$ is a constant, i.e.,
$$\frac{d}{d\rho}\sum_{i=1}^m D_{KL}\!\left(\rho\,\|\,\mathcal{F}_{k,0}^i\right)\bigg|_{\rho=\mathcal{F}_k^*} = -\,m\ln\!\left(\frac{1}{e}\int_{\mathcal{X}}\prod_{i=1}^m p_{k,0}^i(x)^{1/m}\, d\mu(x)\right).$$
This indicates that to further decrease the convex cost function we would have to change the normalizing constant of $\mathcal{F}_k^*$, which would take us out of the set $L^1_+(\mathcal{X})$. Hence $\mathcal{F}_k^*$ is the global minimum of the convex cost function (21) on the convex set $L^1_+(\mathcal{X})$. This is illustrated with a simple example in Fig. 4.

Another proof approach involves taking the logarithm in the KL divergence formula to the base $1/c$, where $c := \frac{1}{e}\int_{\mathcal{X}} \prod_{i=1}^m p_{k,0}^i(x)^{1/m}\, d\mu(x)$. Then differentiating $\sum_{i=1}^m D_{KL}(\rho\,\|\,\mathcal{F}_{k,0}^i)$ with respect to $\rho$ gives:
$$\int_{\mathcal{X}}\left(m\left(\log_{1/c}\rho(x) + 1\right) - \sum_{i=1}^m \log_{1/c} p_{k,0}^i(x)\right) d\mu(x) = 0,$$
which is minimized by $\mathcal{F}_k^*$. Hence $\mathcal{F}_k^*$ is indeed the global minimum of the convex optimization problem (21).

Note that if a central agent received all the locally estimated posterior pdfs $\mathcal{F}_{k,0}^j$, $j \in \{1,\ldots,m\}$, and were tasked with finding the best estimate in this information-theoretic sense, it would also arrive at the same consensual pdf $\mathcal{F}_k^*$ given by (14). Hence we claim to have achieved distributed estimation with this algorithm. In Remark 5, we state that the methods for recursively combining probability distributions to reach a consensual distribution are limited to the LinOP, the LogOP, and their affine combinations.

Remark 5.
The LinOP and LogOP methods for combining probability distributions can be generalized by the $g$-Quasi-Linear Opinion Pool ($g$-QLOP), which is described by the following equation:
$$\mathcal{F}_{k,\nu}^j = \frac{g^{-1}\!\left(\sum_{l \in J_k^j} a_{jl,\nu-1}\, g(\mathcal{F}_{k,\nu-1}^l)\right)}{\int_{\mathcal{X}} g^{-1}\!\left(\sum_{l \in J_k^j} a_{jl,\nu-1}\, g(\mathcal{F}_{k,\nu-1}^l)\right) d\mu(x)}, \quad \forall j \in \{1,\ldots,m\},\ \forall \nu \in \mathbb{N}, \tag{24}$$
where $g$ is a continuous, strictly monotone function. It is shown in [60] that, other than linear combinations of the LinOP and the LogOP, there is no function $g$ for which the final consensus can be expressed by the following equation:
$$\lim_{\nu\to\infty}\mathcal{F}_{k,\nu}^j = \mathcal{F}_k^* = \frac{g^{-1}\!\left(\sum_{l=1}^m \pi_l\, g(\mathcal{F}_{k,0}^l)\right)}{\int_{\mathcal{X}} g^{-1}\!\left(\sum_{l=1}^m \pi_l\, g(\mathcal{F}_{k,0}^l)\right) d\mu(x)}, \quad \forall j \in \{1,\ldots,m\}, \tag{25}$$
where $\pi$ is the unique stationary solution. Moreover, the function $g$ is said to be $m$-Markovian if the scheme (24) for combining probability distributions yields the consensus (25) for every regular communication network topology and for all initial positive densities. It is also shown that $g$ is $m$-Markovian if and only if the $g$-QLOP is either the LinOP or the LogOP [60].

Fig. 4. Let the discrete state space $\mathcal{X}$ have only two states $x_1$ and $x_2$. All valid pmfs must lie on the set $L^1_+(\mathcal{X})$, where $P(x_1) + P(x_2) = 1$. Given three initial pmfs $\mathcal{F}_{k,0}^i$, $i \in \{1, 2, 3\}$, the objective is to find the pmf that globally minimizes the convex cost function $\sum_{i=1}^3 D_{KL}(\rho\,\|\,\mathcal{F}_{k,0}^i)$. In (a), $\rho^* = \frac{1}{e}\prod_{i=1}^3 (\mathcal{F}_{k,0}^i)^{1/3}$ globally minimizes the cost function, but it does not lie on $L^1_+(\mathcal{X})$. In (b), $\mathcal{F}_k^* \in L^1_+(\mathcal{X})$, the projection of $\rho^*$ onto the set $L^1_+(\mathcal{X})$ obtained by normalizing $\rho^*$ to one, indeed globally minimizes the cost function on the set $L^1_+(\mathcal{X})$.

C. Communicating Probability Distributions

The consensus algorithms using either the LinOP or the LogOP need the estimated pdfs to be communicated to the other agents in the network. We propose to adopt the following methods for communicating pdfs. The first approach involves approximating the pdf by a weighted sum of Gaussians and then transmitting this approximate distribution. Let $\mathcal{N}(x; \mu_i, B_i)$ denote the Gaussian density function, where the mean is the $n_x$-vector $\mu_i$ and the covariance is the positive-definite symmetric matrix $B_i$.
The Gaussian sum approximation lemma [62, pp. 213] states that any pdf $\mathcal{F} = p(x)$ can be approximated as closely as desired in the $L^1(\mathbb{R}^{n_x})$ space by a pdf of the form $\hat{\mathcal{F}} = \hat{p}(x) = \sum_{i=1}^{n_g} \alpha_i\, \mathcal{N}(x; \mu_i, B_i)$, for some integer $n_g$ and positive scalars $\alpha_i$ with $\sum_{i=1}^{n_g} \alpha_i = 1$. For an acceptable communication error $\varepsilon_{com} > 0$, there exist $n_g$, $\alpha_i$, $\mu_i$, and $B_i$ such that $\|\mathcal{F} - \hat{\mathcal{F}}\|_{L^1} \le \varepsilon_{com}$. Several techniques for estimating these parameters are discussed in the Gaussian mixture model literature, such as maximum likelihood (ML) and maximum a posteriori (MAP) parameter estimation [63]–[65]. Hence, in order to communicate the pdf $\hat{\mathcal{F}}$, an agent needs to transmit $\frac{1}{2} n_g n_x (n_x + 3)$ real numbers.

Let us study the effect of this communication error $\varepsilon_{com}$ on the LinOP consensual distribution. Let $\tilde{\mathcal{F}}_{k,\nu}^j$ be the LinOP solution after combining local pdfs corrupted by communication error, i.e., $\tilde{\mathcal{F}}_{k,\nu}^j := \mathcal{T}_{l \in J_k^j}\{\hat{\mathcal{F}}_{k,\nu-1}^l\}$, where $\mathcal{T}$ denotes the

LinOP (6). We prove by induction that $\|\tilde{\mathcal{F}}_{k,\nu}^j - \mathcal{F}_{k,\nu}^j\|_{L^1} \le \nu\,\varepsilon_{com}$, $\forall \nu \in \mathbb{N}$, where $\mathcal{F}_{k,\nu}^j$ is the true solution obtained from the uncorrupted local pdfs. As the basis of induction holds, the inductive step for the $\nu$-th consensus step is as follows:
$$\|\tilde{\mathcal{F}}_{k,\nu}^j - \mathcal{F}_{k,\nu}^j\|_{L^1} \le \sum_{l \in J_k^j} a_{jl,\nu-1}\left(\|\tilde{\mathcal{F}}_{k,\nu-1}^l - \mathcal{F}_{k,\nu-1}^l\|_{L^1} + \|\mathcal{F}_{k,\nu-1}^l - \hat{\mathcal{F}}_{k,\nu-1}^l\|_{L^1}\right) \le (\nu-1)\,\varepsilon_{com} + \varepsilon_{com}. \tag{26}$$
Similarly, it follows from the proof of Theorem 5 that the LogOP solution after $n_{loop}$ iterations under communication inaccuracies, $\tilde{\mathcal{F}}_{k,n_{loop}}^j$, is always within a ball of radius $n_{loop}\,\varepsilon_{com}$ around the true LogOP solution $\mathcal{F}_{k,n_{loop}}^j$ in the $L^1$ space, i.e., $\|\tilde{\mathcal{F}}_{k,n_{loop}}^j - \mathcal{F}_{k,n_{loop}}^j\|_{L^1} \le n_{loop}\,\varepsilon_{com}$.

If particle filters are used to evaluate the Bayesian filter and combine the pdfs [36], [65], then the resampled particles represent the agent's estimated pdf of the target. Hence communicating pdfs is equivalent to transmitting these resampled particles. The information-theoretic approach to communicating pdfs is discussed in [66]. Let the local pdf $\mathcal{F}_{k,\nu}^j$ be transmitted over a communication channel using a finite sequence, and let the pdf $\hat{\mathcal{F}}_{k,\nu}^j$ be reconstructed by the receiving agent. For a given error threshold, the minimum rate such that the variational distortion between $\mathcal{F}_{k,\nu}^j$ and $\hat{\mathcal{F}}_{k,\nu}^j$ is bounded by the threshold is given by the mutual information between the transmitted and received finite sequences.

Now that we have established that communication of pdfs is possible, let us discuss the complete BCF algorithm.

Algorithm 1: BCF-LogOP on SC balanced digraphs
1: (one cycle of the $j$-th agent during the $k$-th time instant)
2: Given the pdf from the previous time step $\mathcal{F}_{k-1}^j = p_{k-1}^j(x_{k-1})$
3: Set $n_{loop}$ and the weights $a_{jl}$ (Theorems 5, 7)
4: while tracking do
5:   Compute the prior pdf $p^j(x_k)$ using (3)  [Bayesian filtering stage, Sec. II-B]
6:   Compute the posterior pdf $\mathcal{F}_k^j = p^j(x_k\,|\,z_k^{S^j})$ using (4) and the measurement array $z_k^{S^j}$
7:   for $\nu = 1$ to $n_{loop}$ do  [LogOP-based consensus stage, Sec. III-B]
8:     if $\nu = 1$ then set $\mathcal{F}_{k,0}^j = \mathcal{F}_k^j$ end if
9:     Obtain the communicated pdfs $\mathcal{F}_{k,\nu-1}^l$, $\forall l \in J_k^j$
10:    Compute the new pdf $\mathcal{F}_{k,\nu}^j$ using the LogOP (8)
11:  end for
12:  Set $\mathcal{F}_k^j = \mathcal{F}_{k,n_{loop}}^j$
13: end while
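One cycle of Algorithm 1 can be sketched on a discretized state space as follows. The helper names (`transition`, `likelihood`, `neighbors`) are hypothetical stand-ins for the target dynamics (3), the measurement update (4), and the communication step; they are not part of the paper:

```python
import numpy as np

def bcf_cycle(p_prev, z, transition, likelihood, neighbors, a, n_loop, dx):
    """One cycle of Algorithm 1 (BCF-LogOP) for a single agent.

    p_prev    : pdf over the grid from the previous time step
    z         : measurement (or measurement array) for this time instant
    transition: Markov transition matrix standing in for the dynamics (3)
    likelihood: callable z -> grid of p(z | x) values (measurement model (2))
    neighbors : callable (nu, F) -> list of neighbor pdfs, including own F
    a         : consensus weights a_{jl} over the inclusive neighborhood
    n_loop    : number of consensus loops (chosen, e.g., via Theorem 7)
    dx        : grid spacing used for the normalizing integral
    """
    # Bayesian filtering stage: prediction (3) then measurement update (4).
    p_prior = transition @ p_prev
    p_post = p_prior * likelihood(z)
    p_post /= p_post.sum() * dx

    # LogOP-based consensus stage (8): weighted geometric mean, renormalized.
    # All pdfs must be strictly positive (Assumption 7) for the log to exist.
    F = p_post
    for nu in range(n_loop):
        pdfs = neighbors(nu, F)
        logF = sum(w * np.log(p) for w, p in zip(a, pdfs))
        F = np.exp(logF - logF.max())    # subtract max for numerical safety
        F /= F.sum() * dx
    return F
```

The returned pdf plays the role of $\mathcal{F}_{k,n_{loop}}^j$ in line 12 of Algorithm 1 and is carried into the next time instant.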
IV. MAIN ALGORITHMS: BAYESIAN CONSENSUS FILTERING

In this section, we finally solve the complete problem statement for BCF discussed in Section II-A, using Algorithm 1. We also introduce a hierarchical algorithm that can be used when some agents in the network fail to observe the target.

A. Bayesian Consensus Filtering

The BCF is performed in two steps: (i) each agent locally estimates the pdf of the target's states using a Bayesian filter, with or without measurements from neighboring agents, as discussed in Section II-B; and (ii) during the consensus stage, each agent recursively transmits its pdf estimate of the target's states to the other agents, receives the estimates of its neighboring agents, and combines them using the LogOP, as discussed in Section III-B. According to [67], this strategy of first updating the local estimate and then combining the local estimates to achieve consensus is stable and gives the best performance in comparison with other update-combine strategies. In this section, we compute the number of consensus loops $n_{loop}$ in Algorithm 1 needed to reach a satisfactory consensus estimate across the network, and we discuss the convergence of this algorithm.

Definition 6 (Disagreement vector $\theta_{k,\nu}$). Let us define $\theta_{k,\nu} := \left(\theta_{k,\nu}^1, \ldots, \theta_{k,\nu}^m\right)^T$, where $\theta_{k,\nu}^j := \|\mathcal{F}_{k,\nu}^j - \mathcal{F}_k^*\|_{L^1}$. Since the $L^1$ distance between pdfs is upper bounded by $2$, the $\ell_2$ norm of the disagreement vector $\|\theta_{k,\nu}\|_{\ell_2}$ is upper bounded by $2\sqrt{m}$. This conservative bound is used to obtain the minimum number of consensus loops for achieving $\varepsilon$-consensus across the network while tracking a moving target. Let us now quantify the divergence of the local pdfs during the Bayesian filtering stage.

Definition 7 (Error propagation dynamics $\Gamma$). Let us assume that the dynamics of the $\ell_2$ norm of the disagreement vector during the Bayesian filtering stage can be obtained from the target dynamics and measurement models (1) and (2).
The error propagation dynamics $\Gamma$ estimates the maximum divergence of the local pdfs during the Bayesian filtering stage, i.e., $\|\theta_{k,0}\|_{\ell_2} \le \Gamma\!\left(\|\theta_{k-1,n_{loop}}\|_{\ell_2}\right)$, where $\theta_{k-1,n_{loop}}$ is the disagreement vector with respect to $\mathcal{F}_{k-1}^*$ at the end of the consensus stage during the $(k-1)$-th time instant, and $\theta_{k,0}$ is the disagreement vector with respect to $\mathcal{F}_k^*$ after the update stage during the $k$-th time instant.

Next, we obtain the minimum number of consensus loops for achieving $\varepsilon$-consensus across the network, and we also derive conditions on the communication network topology for a given number of consensus loops.

Theorem 7 (BCF-LogOP on SC balanced digraphs). Under Assumptions 5, 7, and 8, and with an acceptable communication error $\varepsilon_{com} > 0$, each agent tracks the target using the BCF algorithm. For some acceptable consensus error $\varepsilon_{cons} > 0$ and $\gamma = \min\!\left(\Gamma(\|\theta_{k-1,n_{loop}}\|_{\ell_2}),\, 2\sqrt{m}\right)$: (i) for a given $P_k$, if the number of consensus loops $n_{loop}$ satisfies
$$\sigma_{m-1}(P_k)^{n_{loop}}\,\gamma + 2\, n_{loop}\,\varepsilon_{com} \le \varepsilon_{cons}; \tag{27}$$
or (ii) for a given $n_{loop}$, if the communication network topology $P_k$ during the $k$-th time instant is such that

$$\sigma_{m-1}(P_k) \le \left(\frac{\varepsilon_{cons} - 2\, n_{loop}\,\varepsilon_{com}}{\gamma}\right)^{1/n_{loop}}; \tag{28}$$
then the $\ell_2$ norm of the disagreement vector at the end of the consensus stage is less than $\varepsilon_{cons}$, i.e., $\|\theta_{k,n_{loop}}\|_{\ell_2} \le \varepsilon_{cons}$.

Proof: In the absence of communication inaccuracies, Theorem 5 states that the local estimated pdfs $\mathcal{F}_{k,\nu}^j$ globally exponentially converge pointwise to the consensual pdf $\mathcal{F}_k^*$ given by (14) with a rate of $\sigma_{m-1}(P_k)$, i.e., $\|\mathcal{F}_{k,\nu}^j - \mathcal{F}_k^*\|_{L^1} \le \sigma_{m-1}(P_k)^\nu\, \|\mathcal{F}_{k,0}^j - \mathcal{F}_k^*\|_{L^1}$. If $\theta_{k,0}$ is the initial disagreement vector at the start of the consensus stage, then $\|\theta_{k,n_{loop}}\|_{\ell_2} \le \sigma_{m-1}(P_k)^{n_{loop}}\, \|\theta_{k,0}\|_{\ell_2} \le \sigma_{m-1}(P_k)^{n_{loop}}\,\gamma$. In the presence of communication error, combining (26) with the previous result gives $\|\tilde{\mathcal{F}}_{k,\nu}^j - \mathcal{F}_k^*\|_{L^1} \le \sigma_{m-1}(P_k)^\nu\, \|\mathcal{F}_{k,0}^j - \mathcal{F}_k^*\|_{L^1} + \nu\,\varepsilon_{com}$. The disagreement vector after $n_{loop}$ iterations is therefore bounded by $\|\theta_{k,n_{loop}}\|_{\ell_2} \le \sigma_{m-1}(P_k)^{n_{loop}}\, \|\theta_{k,0}\|_{\ell_2} + 2\, n_{loop}\,\varepsilon_{com}$. Thus we get the conditions on $n_{loop}$ or $\sigma_{m-1}(P_k)$ from the inequality $\sigma_{m-1}(P_k)^{n_{loop}}\,\gamma + 2\, n_{loop}\,\varepsilon_{com} \le \varepsilon_{cons}$.

Note that in the absence of communication inaccuracies, (27) simplifies to $n_{loop} \ge \frac{\ln(\varepsilon_{cons}/\gamma)}{\ln \sigma_{m-1}(P_k)}$ and (28) simplifies to $\sigma_{m-1}(P_k) \le \left(\varepsilon_{cons}/\gamma\right)^{1/n_{loop}}$. In the particular case where $n_{loop} = 1$ and communication errors are present, (28) simplifies to $\sigma_{m-1}(P_k) \le \frac{\varepsilon_{cons} - 2\varepsilon_{com}}{\gamma}$, and the necessary condition for a valid solution is $2\varepsilon_{com} < \varepsilon_{cons}$. In the general case, it is desirable that $2\varepsilon_{com} \ll \varepsilon_{cons}$ for a valid solution to Theorem 7 to exist.

B. Hierarchical Bayesian Consensus Filtering

In this section, we modify the original problem statement such that only $m_1$ out of the $m$ agents are able to observe the target at the $k$-th time instant. In this scenario, the other $m_2 = m - m_1$ agents are not able to observe the target. Without loss of generality, we assume that the first $m_1$ agents, i.e., $j \in \{1, 2, \ldots, m_1\}$, are tracking the target. During the Bayesian filtering stage, each tracking agent (i.e., each agent tracking the target) estimates the posterior pdf of the target's states at the $k$-th time instant, $\mathcal{F}_k^j = p^j(x_k\,|\,z_k^{S^j \cap \{1,\ldots,m_1\}})$, $j \in \{1,\ldots,m_1\}$, using the estimated prior pdf of the target's states $\mathcal{F}_{k-1}^j$ and the new measurement array $z_k^{S^j \cap \{1,\ldots,m_1\}} := \{z_k^l,\ l \in S_k^j \cap \{1,\ldots,
m_1\}\}$ obtained from the neighboring tracking agents. Each non-tracking agent (i.e., each agent not tracking the target) only propagates its prior pdf during this stage to obtain $p^j(x_k)$, $j \in \{m_1+1, \ldots, m\}$. The objective of the hierarchical consensus algorithm is to guarantee pointwise convergence of each $\mathcal{F}_{k,\nu}^j$, $j \in \{1,\ldots,m\}$, to a pdf $\mathcal{F}_k^*$ to which only the local estimates of the agents tracking the target contribute. This is achieved by each tracking agent recursively transmitting its estimate of the target's states to the other agents while receiving estimates only from its neighboring tracking agents and updating its estimate of the target. On the other hand, each non-tracking agent recursively transmits its estimate of the target's states to the other agents, receives estimates from all its neighboring agents, and updates its estimate of the target. This is illustrated by the pseudocode in Algorithm 2 and the following equations:
$$\mathcal{F}_{k,\nu}^j = \mathcal{T}_{l \in J_k^j \cap \{1,\ldots,m_1\}}\{\mathcal{F}_{k,\nu-1}^l\}, \quad \forall j \in \{1,\ldots,m_1\},\ \forall \nu \in \mathbb{N}, \tag{29}$$
$$\mathcal{F}_{k,\nu}^j = \mathcal{T}_{l \in J_k^j}\{\mathcal{F}_{k,\nu-1}^l\}, \quad \forall j \in \{m_1+1,\ldots,m\},\ \forall \nu \in \mathbb{N}, \tag{30}$$
where $\mathcal{T}$ refers to the LogOP (8) for combining pdfs. Let $\mathcal{D}_k$ represent the communication network topology of only the tracking agents.

Algorithm 2: Hierarchical BCF-LogOP on SC balanced digraphs
1: (one cycle of the $j$-th agent during the $k$-th time instant)
2: Given the pdf from the previous time step $\mathcal{F}_{k-1}^j = p_{k-1}^j(x_{k-1})$
3: Set $n_{loop}$ and the weights $a_{jl}$ (Theorems 7, 8)
4: while tracking do
5:   Compute the prior pdf $p^j(x_k)$ using (3)  [Bayesian filtering stage, Sec. II-B]
6:   if $j \le m_1$ then
7:     Compute the posterior pdf $\mathcal{F}_k^j$ using (4) and $z_k^{S^j \cap \{1,\ldots,m_1\}}$ end if
8:   for $\nu = 1$ to $n_{loop}$ do  [hierarchical LogOP-based consensus stage, Sec. IV-B]
9:     if $\nu = 1$ then
10:      if $j \le m_1$ then set $\mathcal{F}_{k,0}^j = \mathcal{F}_k^j$
11:      else set $\mathcal{F}_{k,0}^j = p^j(x_k)$ end if end if
12:    if $j \le m_1$ then
13:      Obtain the pdfs $\mathcal{F}_{k,\nu-1}^l$, $\forall l \in J_k^j \cap \{1,\ldots,m_1\}$, from the tracking neighbors
14:    else Obtain the pdfs $\mathcal{F}_{k,\nu-1}^l$, $\forall l \in J_k^j$, from the neighbors end if
15:    Compute the new pdf $\mathcal{F}_{k,\nu}^j$ using the LogOP (8)
16:  end for
17:  Set $\mathcal{F}_k^j = \mathcal{F}_{k,n_{loop}}^j$
18: end while

Assumption 9. The communication network topologies $\mathcal{G}_k$ and $\mathcal{D}_k$ are SC, and the weights $a_{jl}$ are such that the digraph $\mathcal{D}_k$ is balanced.
The weights $a_{jl,\nu-1}$, $j, l \in \{1,\ldots,m\}$, and the matrix $P_{k,\nu-1}$ have the following properties: (i) the weights are the same for all consensus loops within each time instant, i.e., $a_{jl,\nu-1} = a_{jl}$ and $P_{k,\nu-1} = P_k$, $\forall \nu \in \mathbb{N}$; moreover, $P_k$ can be decomposed into four parts,
$$P_k = \begin{bmatrix} P_1 & P_2 \\ P_3 & P_4 \end{bmatrix},$$
where $P_1 \in \mathbb{R}^{m_1 \times m_1}$, $P_2 \in \mathbb{R}^{m_1 \times m_2}$, $P_3 \in \mathbb{R}^{m_2 \times m_1}$, and $P_4 \in \mathbb{R}^{m_2 \times m_2}$; (ii) if $j \in \{1,\ldots,m_1\}$, then $a_{jl} > 0$ if and only if $l \in J_k^j \cap \{1,\ldots,m_1\}$, else $a_{jl} = 0$; hence $P_2 = \mathbf{0}_{m_1 \times m_2}$; moreover, $P_1$ is balanced, i.e., $\sum_{l} a_{jl} = \sum_{r} a_{rj}$, where $j, l, r \in \{1,\ldots,m_1\}$; (iii)

if $j \in \{m_1+1,\ldots,m\}$, then $a_{jl} > 0$ if and only if $l \in J_k^j$, else $a_{jl} = 0$; and (iv) the matrix $P_k$ is row stochastic, i.e., $\sum_{l=1}^m a_{jl} = 1$.

Theorem 8 (Hierarchical consensus using the LogOP on SC balanced digraphs). Under Assumptions 7 and 9, using the LogOP (8), each $\mathcal{F}_{k,\nu}^j$ globally exponentially converges pointwise to the pdf $\mathcal{F}_k^*$ given by:
$$\mathcal{F}_k^* = p_k^*(x) = \frac{\prod_{i=1}^{m_1} p_{k,0}^i(x)^{1/m_1}}{\int_{\mathcal{X}} \prod_{i=1}^{m_1} p_{k,0}^i(x)^{1/m_1}\, d\mu(x)} \tag{31}$$
at a rate faster than or equal to $\sqrt{\lambda_{m_1-1}(P_1^T P_1)} = \sigma_{m_1-1}(P_1)$. Only the initial estimates of the tracking agents contribute to the consensual pdf $\mathcal{F}_k^*$. Furthermore, the induced measures converge in total variation, i.e., $\lim_{\nu\to\infty} \|\mu_{\mathcal{F}_k^*} - \mu_{\mathcal{F}_{k,\nu}^j}\|_{TV} = 0$, $\forall j \in \{1,\ldots,m\}$.

Proof: The matrix $P_1$ conforms to the balanced digraph $\mathcal{D}_k$. Let $\mathbf{1}_{m_1} = [1, 1, \ldots, 1]^T$, with $m_1$ elements. Similar to the proof of Theorem 5, $P_1$ is a primitive matrix and $\lim_{\nu\to\infty} P_1^\nu = \frac{1}{m_1}\mathbf{1}_{m_1}\mathbf{1}_{m_1}^T$. Next, we decompose $\mathcal{U}_{k,\nu}$ from (11) into two parts, $\mathcal{U}_{k,\nu} = [\mathcal{Y}_{k,\nu};\ \mathcal{Z}_{k,\nu}]$, where $\mathcal{Y}_{k,\nu} = (H_{k,\nu}^1, \ldots, H_{k,\nu}^{m_1})^T$ and $\mathcal{Z}_{k,\nu} = (H_{k,\nu}^{m_1+1}, \ldots, H_{k,\nu}^m)^T$. Since $P_2$ is a zero matrix, (11) hierarchically decomposes into:
$$\mathcal{Y}_{k,\nu} = P_1^\nu\, \mathcal{Y}_{k,0}, \quad \forall \nu \in \mathbb{N}, \tag{32}$$
$$\mathcal{Z}_{k,\nu+1} = P_3\, \mathcal{Y}_{k,\nu} + P_4\, \mathcal{Z}_{k,\nu}, \quad \forall \nu \in \mathbb{N}. \tag{33}$$
Combining (32) with the previous result gives $\lim_{\nu\to\infty} \mathcal{Y}_{k,\nu} = \frac{1}{m_1}\mathbf{1}_{m_1}\mathbf{1}_{m_1}^T\, \mathcal{Y}_{k,0}$. Thus $\lim_{\nu\to\infty} H_{k,\nu}^j = H_k^* = \frac{1}{m_1}\sum_{i=1}^{m_1} H_{k,0}^i$, $\forall j \in \{1,\ldots,m_1\}$. From the proof of Theorem 5, we conclude that each $\mathcal{F}_{k,\nu}^j$, $j \in \{1,\ldots,m_1\}$, globally exponentially converges pointwise to $\mathcal{F}_k^*$ given by (31) with a rate faster than or equal to $\sigma_{m_1-1}(P_1)$.

Since $\mathcal{G}_k$ is strongly connected, information from the tracking agents reaches the non-tracking agents. Taking the limit of (33) and substituting the above result gives:
$$\lim_{\nu\to\infty} \mathcal{Z}_{k,\nu+1} = P_3\,\mathbf{1}_{m_1} H_k^* + P_4 \lim_{\nu\to\infty} \mathcal{Z}_{k,\nu}. \tag{34}$$
Let $\mathbf{1}_{m_2} = [1, 1, \ldots, 1]^T$, with $m_2$ elements. Since $P_k$ is row stochastic, we get $P_3\,\mathbf{1}_{m_1} = (I - P_4)\,\mathbf{1}_{m_2}$. Hence, from (34), we get $\lim_{\nu\to\infty} \mathcal{Z}_{k,\nu} = \mathbf{1}_{m_2} H_k^*$. Moreover, the inessential states die out geometrically fast [68, pp. 120]. Hence $\lim_{\nu\to\infty} H_{k,\nu}^j = H_k^*$, $\forall j \in \{m_1+1,\ldots,m\}$, and the estimates of the non-tracking agents $\mathcal{F}_{k,\nu}^j$, $j \in \{m_1+1,\ldots,m\}$, also converge pointwise geometrically fast to the same consensual pdf $\mathcal{F}_k^*$ given by (31). By Lemma 2, we get $\lim_{\nu\to\infty} \|\mu_{\mathcal{F}_k^*} - \mu_{\mathcal{F}_{k,\nu}^j}\|_{TV} = 0$, $\forall j \in \{1,\ldots,m\}$.
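The block structure used in the proof above can be illustrated directly in log space. The four-agent weight matrix below is an illustrative assumption satisfying Assumption 9 ($P_2 = 0$, $P_1$ doubly stochastic, all rows summing to one); every agent ends up at the average of the tracking agents' initial values only:

```python
import numpy as np

# m = 4 agents; the first m1 = 2 track the target.
# Block layout: [P1 | P2; P3 | P4], with P2 = 0 so tracking agents
# ignore non-tracking agents, and P1 doubly stochastic (balanced).
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.3, 0.3, 0.4, 0.0],
              [0.2, 0.2, 0.3, 0.3]])

# Log-density samples H_{k,0}^i at a fixed x (illustrative numbers).
U = np.array([10.0, 20.0, -5.0, 7.0])
for _ in range(100):
    U = P @ U

print(U)  # every entry is close to (10 + 20) / 2 = 15, the tracking average
```

Note that the non-tracking agents' initial values (-5 and 7) leave no trace in the limit, exactly as Theorem 8 states: the consensual pdf is built from the tracking agents' estimates alone, while strong connectivity of $\mathcal{G}_k$ carries that consensus out to the rest of the network.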
Note that Theorem 7 from Section IV-A can be directly applied to find the minimum number of consensus loops $n_{loop}$ for achieving $\varepsilon$-consensus in a given communication network topology, or to design the $P_1$ matrix for a given number of consensus loops. A simulation example of the Hierarchical BCF-LogOP algorithm for tracking orbital debris in space is discussed in the next section.

Fig. 5. The SSN locations are shown along with their static SC balanced communication network topology. The orbit of the Iridium 33 debris is shown in red, where the markers denote its actual position at particular time instants.

V. NUMERICAL EXAMPLE

Currently, there are over ten thousand objects of size 0.5 cm or greater in Earth orbit, and almost 95% of them are nonfunctional space debris. These debris pose a significant threat to functional spacecraft and satellites in orbit. The US has established the Space Surveillance Network (SSN) for ground-based observations of orbital debris using radars and optical telescopes [69], [70]. In February 2009, the Iridium 33 satellite collided with the Kosmos 2251 satellite, and a large number of debris fragments were created. In this section, we use the Hierarchical BCF-LogOP algorithm to track one of the Iridium 33 debris objects created in this collision. The orbit of this debris object around the Earth and the locations of the SSN sensors are shown in Fig. 5. The actual two-line element set (TLE) of the Iridium 33 debris was accessed from the North American Aerospace Defense Command (NORAD) on December 4, 2013. The nonlinear Simplified General Perturbations (SGP4) model, which uses an extensive gravitational model and accounts for the drag effect on mean motion [71], [72], is used as the target dynamics model. The communication network topology of the SSN is assumed to be a static SC balanced graph, as shown in Fig. 5. If the debris is visible above a sensor's horizon, then that sensor is assumed to generate a single measurement during each time step of one minute.
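The per-minute measurement generation described above can be sketched as follows. The elevation test and noise level are illustrative placeholders (a spherical-Earth horizon check), not the SSN's actual sensor parameters:

```python
import numpy as np

def measure(x_true, sensor_pos, noise_std, horizon_elev_deg=0.0, rng=None):
    """Return a noisy position measurement if the debris is above the
    sensor's horizon, else None (one measurement per one-minute time step).

    Illustrative stand-in for an additive-noise sensor model: the
    measurement is the true position plus zero-mean Gaussian noise whose
    level depends on the sensor. Positions are Earth-centered vectors.
    """
    rng = rng or np.random.default_rng()
    los = x_true - sensor_pos                       # line of sight from sensor
    # sin(elevation) = cosine of the angle between the line of sight and
    # the local vertical (sensor_pos direction), for a spherical Earth.
    sin_elev = np.dot(los, sensor_pos) / (np.linalg.norm(los)
                                          * np.linalg.norm(sensor_pos))
    elev = np.degrees(np.arcsin(np.clip(sin_elev, -1.0, 1.0)))
    if elev < horizon_elev_deg:
        return None                                 # below horizon: no measurement
    return x_true + rng.normal(0.0, noise_std, size=3)
```

A sensor whose horizon the debris has not cleared simply contributes nothing at that time step, which is exactly the situation the hierarchical algorithm of Section IV-B is designed to handle.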
The heterogeneous measurement model of the $j$-th sensor is given by $z_k^j = x_k + w_k^j$, where $w_k^j \sim \mathcal{N}(\mathbf{0}, \sigma_j^2 I)$, $x_k \in \mathbb{R}^3$ is the actual location of the debris, and the intensity of the additive Gaussian measurement noise depends on the sensor number. Since it is not possible to implement the SGP4 target dynamics with the distributed estimation algorithms discussed in the literature [11]–[20], we compare the performance of our Hierarchical BCF-LogOP algorithm (Algorithm 2) against the

Fig. 6. (a) Number of SSN sensors observing the debris. Trajectories of the particles of the stand-alone Bayesian filters for the (b) 3rd, (c) 10th, and (d) 22nd SSN sensors.

Fig. 7. Trajectories of the particles of all sensors for (a) Hierarchical BCF-LinOP and (b) Hierarchical BCF-LogOP; the color bar on the right indexes the 33 SSN sensors. Evolution of the consensual probability distribution for (c) Hierarchical BCF-LinOP and (d) Hierarchical BCF-LogOP.

Hierarchical BCF-LinOP algorithm, in which the LinOP is used during the consensus stage.

In this simulation example, we simplify the debris-tracking problem by assuming that only the mean motion $n$ of the debris is unknown. The objective of this simulation example is to estimate $n$ for the Iridium 33 debris within 100 minutes. Hence, each sensor knows the other TLE parameters of the debris, and a uniform prior distribution $\mathcal{F}_0$ is assumed. Note that at any time instant, only a few of the SSN sensors can observe the debris, as shown in Fig. 6(a). The results of three stand-alone Bayesian filters, implemented using particle filters with resampling [36], are shown in Fig. 6(b)-(d). Note that the estimates of the 22nd and 10th sensors initially do not converge, in spite of observing the debris for some time, due to large measurement errors. The estimates of the 3rd sensor do converge once it is able to observe the debris after 70 minutes. Hence we propose to use the Hierarchical BCF-LogOP algorithm, where the consensual distribution is updated as and when sensors observe the debris. Particle filters with resampling are used to evaluate the Bayesian filters and communicate the pdfs in the Hierarchical BCF algorithms.
Each sensor uses 1,000 particles, and 10 consensus loops are executed during each time step of one minute. The trajectories of all the particles of the sensors in the Hierarchical BCF algorithm using the LinOP and the LogOP, and their respective consensual probability distributions at different time instants, are shown in Fig. 7(a)-(d). As expected, all the sensors converge on the correct value of $n$ of 14.6 revs per day. The Hierarchical BCF-LinOP estimates are multimodal for the first 90 minutes. On the other hand, the Hierarchical BCF-LogOP estimates converge to the correct value within the first 10 minutes, because the LogOP algorithm efficiently communicates the best consensual estimate to the other sensors during each time step and achieves consensus across the network.

VI. CONCLUSION

In this paper, we extended the scope of distributed estimation algorithms in a Bayesian filtering framework in order to simultaneously track targets, with general nonlinear time-varying target dynamic models, using a strongly connected network of heterogeneous agents with general nonlinear time-varying measurement models. We introduced the Bayesian filter, with or without measurement exchange, to generate the local estimated pdfs of the target's states. We compared the LinOP and LogOP methods of combining the local posterior pdfs and determined that the LogOP is the superior scheme. The LogOP algorithm on SC balanced digraphs converges globally exponentially, and the consensual pdf minimizes the information lost during the consensus stage because it minimizes the sum of KL divergences to each locally estimated probability distribution. We also explored several methods of communicating pdfs across the sensor network. We introduced the BCF algorithm, in which the local estimated posterior pdfs of the target's states are first updated using the Bayesian filter and then recursively combined during the consensus stage using the LogOP, so that the agents can track a moving target and also maintain consensus across the network.
Conditions for exponential convergence of the BCF algorithm and constraints on the communication network topology have been studied. The Hierarchical BCF algorithm, in which some of the agents do not observe the target, has also been investigated. Simulation results demonstrate the effectiveness of the BCF algorithms for nonlinear distributed estimation problems.

ACKNOWLEDGMENT

The authors would like to thank F. Hadaegh, D. Bayard, S. Hutchinson, P. Voulgaris, M. Egerstedt, A. Gupta, A. Dani, D. Morgan, S. Sengupta, and A. Olshevsky for stimulating discussions about this paper.


Identical Maximum Likelihood State Estimation Based on Incremental Finite Mixture Model in PHD Filter Identical Maxiu Lielihood State Estiation Based on Increental Finite Mixture Model in PHD Filter Gang Wu Eail: xjtuwugang@gail.co Jing Liu Eail: elelj20080730@ail.xjtu.edu.cn Chongzhao Han Eail: czhan@ail.xjtu.edu.cn

More information

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 04,, p. 7 5 ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD M a t h e a t i c s Yu. A. HAKOPIAN, R. Z. HOVHANNISYAN

More information

SPECTRUM sensing is a core concept of cognitive radio

SPECTRUM sensing is a core concept of cognitive radio World Acadey of Science, Engineering and Technology International Journal of Electronics and Counication Engineering Vol:6, o:2, 202 Efficient Detection Using Sequential Probability Ratio Test in Mobile

More information

On Conditions for Linearity of Optimal Estimation

On Conditions for Linearity of Optimal Estimation On Conditions for Linearity of Optial Estiation Erah Akyol, Kuar Viswanatha and Kenneth Rose {eakyol, kuar, rose}@ece.ucsb.edu Departent of Electrical and Coputer Engineering University of California at

More information

Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion

Supplementary Material for Fast and Provable Algorithms for Spectrally Sparse Signal Reconstruction via Low-Rank Hankel Matrix Completion Suppleentary Material for Fast and Provable Algoriths for Spectrally Sparse Signal Reconstruction via Low-Ran Hanel Matrix Copletion Jian-Feng Cai Tianing Wang Ke Wei March 1, 017 Abstract We establish

More information

Solutions of some selected problems of Homework 4

Solutions of some selected problems of Homework 4 Solutions of soe selected probles of Hoework 4 Sangchul Lee May 7, 2018 Proble 1 Let there be light A professor has two light bulbs in his garage. When both are burned out, they are replaced, and the next

More information

Detection and Estimation Theory

Detection and Estimation Theory ESE 54 Detection and Estiation Theory Joseph A. O Sullivan Sauel C. Sachs Professor Electronic Systes and Signals Research Laboratory Electrical and Systes Engineering Washington University 11 Urbauer

More information

Boosting with log-loss

Boosting with log-loss Boosting with log-loss Marco Cusuano-Towner Septeber 2, 202 The proble Suppose we have data exaples {x i, y i ) i =... } for a two-class proble with y i {, }. Let F x) be the predictor function with the

More information

Feature Extraction Techniques

Feature Extraction Techniques Feature Extraction Techniques Unsupervised Learning II Feature Extraction Unsupervised ethods can also be used to find features which can be useful for categorization. There are unsupervised ethods that

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory Proble sets 5 and 6 Due: Noveber th Please send your solutions to learning-subissions@ttic.edu Notations/Definitions Recall the definition of saple based Radeacher

More information

arxiv: v1 [cs.ds] 3 Feb 2014

arxiv: v1 [cs.ds] 3 Feb 2014 arxiv:40.043v [cs.ds] 3 Feb 04 A Bound on the Expected Optiality of Rando Feasible Solutions to Cobinatorial Optiization Probles Evan A. Sultani The Johns Hopins University APL evan@sultani.co http://www.sultani.co/

More information

Chaotic Coupled Map Lattices

Chaotic Coupled Map Lattices Chaotic Coupled Map Lattices Author: Dustin Keys Advisors: Dr. Robert Indik, Dr. Kevin Lin 1 Introduction When a syste of chaotic aps is coupled in a way that allows the to share inforation about each

More information

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis

Experimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis City University of New York (CUNY) CUNY Acadeic Works International Conference on Hydroinforatics 8-1-2014 Experiental Design For Model Discriination And Precise Paraeter Estiation In WDS Analysis Giovanna

More information

Least Squares Fitting of Data

Least Squares Fitting of Data Least Squares Fitting of Data David Eberly, Geoetric Tools, Redond WA 98052 https://www.geoetrictools.co/ This work is licensed under the Creative Coons Attribution 4.0 International License. To view a

More information

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition

More information

Bayes Decision Rule and Naïve Bayes Classifier

Bayes Decision Rule and Naïve Bayes Classifier Bayes Decision Rule and Naïve Bayes Classifier Le Song Machine Learning I CSE 6740, Fall 2013 Gaussian Mixture odel A density odel p(x) ay be ulti-odal: odel it as a ixture of uni-odal distributions (e.g.

More information

Optimal Jamming Over Additive Noise: Vector Source-Channel Case

Optimal Jamming Over Additive Noise: Vector Source-Channel Case Fifty-first Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 2-3, 2013 Optial Jaing Over Additive Noise: Vector Source-Channel Case Erah Akyol and Kenneth Rose Abstract This paper

More information

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley osig 1 Winter Seester 2018 Lesson 6 27 February 2018 Outline Perceptrons and Support Vector achines Notation...2 Linear odels...3 Lines, Planes

More information

Support recovery in compressed sensing: An estimation theoretic approach

Support recovery in compressed sensing: An estimation theoretic approach Support recovery in copressed sensing: An estiation theoretic approach Ain Karbasi, Ali Horati, Soheil Mohajer, Martin Vetterli School of Coputer and Counication Sciences École Polytechnique Fédérale de

More information

INNER CONSTRAINTS FOR A 3-D SURVEY NETWORK

INNER CONSTRAINTS FOR A 3-D SURVEY NETWORK eospatial Science INNER CONSRAINS FOR A 3-D SURVEY NEWORK hese notes follow closely the developent of inner constraint equations by Dr Willie an, Departent of Building, School of Design and Environent,

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

Bipartite subgraphs and the smallest eigenvalue

Bipartite subgraphs and the smallest eigenvalue Bipartite subgraphs and the sallest eigenvalue Noga Alon Benny Sudaov Abstract Two results dealing with the relation between the sallest eigenvalue of a graph and its bipartite subgraphs are obtained.

More information

Iterative Decoding of LDPC Codes over the q-ary Partial Erasure Channel

Iterative Decoding of LDPC Codes over the q-ary Partial Erasure Channel 1 Iterative Decoding of LDPC Codes over the q-ary Partial Erasure Channel Rai Cohen, Graduate Student eber, IEEE, and Yuval Cassuto, Senior eber, IEEE arxiv:1510.05311v2 [cs.it] 24 ay 2016 Abstract In

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lessons 7 20 Dec 2017 Outline Artificial Neural networks Notation...2 Introduction...3 Key Equations... 3 Artificial

More information

Tracking using CONDENSATION: Conditional Density Propagation

Tracking using CONDENSATION: Conditional Density Propagation Tracking using CONDENSATION: Conditional Density Propagation Goal Model-based visual tracking in dense clutter at near video frae rates M. Isard and A. Blake, CONDENSATION Conditional density propagation

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

Estimating Parameters for a Gaussian pdf

Estimating Parameters for a Gaussian pdf Pattern Recognition and achine Learning Jaes L. Crowley ENSIAG 3 IS First Seester 00/0 Lesson 5 7 Noveber 00 Contents Estiating Paraeters for a Gaussian pdf Notation... The Pattern Recognition Proble...3

More information

Approximation in Stochastic Scheduling: The Power of LP-Based Priority Policies

Approximation in Stochastic Scheduling: The Power of LP-Based Priority Policies Approxiation in Stochastic Scheduling: The Power of -Based Priority Policies Rolf Möhring, Andreas Schulz, Marc Uetz Setting (A P p stoch, r E( w and (B P p stoch E( w We will assue that the processing

More information

Machine Learning Basics: Estimators, Bias and Variance

Machine Learning Basics: Estimators, Bias and Variance Machine Learning Basics: Estiators, Bias and Variance Sargur N. srihari@cedar.buffalo.edu This is part of lecture slides on Deep Learning: http://www.cedar.buffalo.edu/~srihari/cse676 1 Topics in Basics

More information

The proofs of Theorem 1-3 are along the lines of Wied and Galeano (2013).

The proofs of Theorem 1-3 are along the lines of Wied and Galeano (2013). A Appendix: Proofs The proofs of Theore 1-3 are along the lines of Wied and Galeano (2013) Proof of Theore 1 Let D[d 1, d 2 ] be the space of càdlàg functions on the interval [d 1, d 2 ] equipped with

More information

Interactive Markov Models of Evolutionary Algorithms

Interactive Markov Models of Evolutionary Algorithms Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary

More information

PAC-Bayes Analysis Of Maximum Entropy Learning

PAC-Bayes Analysis Of Maximum Entropy Learning PAC-Bayes Analysis Of Maxiu Entropy Learning John Shawe-Taylor and David R. Hardoon Centre for Coputational Statistics and Machine Learning Departent of Coputer Science University College London, UK, WC1E

More information

IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 60, NO. 2, FEBRUARY ETSP stands for the Euclidean traveling salesman problem.

IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 60, NO. 2, FEBRUARY ETSP stands for the Euclidean traveling salesman problem. IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 60, NO., FEBRUARY 015 37 Target Assignent in Robotic Networks: Distance Optiality Guarantees and Hierarchical Strategies Jingjin Yu, Meber, IEEE, Soon-Jo Chung,

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

Randomized Recovery for Boolean Compressed Sensing

Randomized Recovery for Boolean Compressed Sensing Randoized Recovery for Boolean Copressed Sensing Mitra Fatei and Martin Vetterli Laboratory of Audiovisual Counication École Polytechnique Fédéral de Lausanne (EPFL) Eail: {itra.fatei, artin.vetterli}@epfl.ch

More information

Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Time-Varying Jamming Links

Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Time-Varying Jamming Links Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Tie-Varying Jaing Links Jun Kurihara KDDI R&D Laboratories, Inc 2 5 Ohara, Fujiino, Saitaa, 356 8502 Japan Eail: kurihara@kddilabsjp

More information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information

Inspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016 Lessons 7 14 Dec 2016 Outline Artificial Neural networks Notation...2 1. Introduction...3... 3 The Artificial

More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

Tight Bounds for Maximal Identifiability of Failure Nodes in Boolean Network Tomography

Tight Bounds for Maximal Identifiability of Failure Nodes in Boolean Network Tomography Tight Bounds for axial Identifiability of Failure Nodes in Boolean Network Toography Nicola Galesi Sapienza Università di Roa nicola.galesi@uniroa1.it Fariba Ranjbar Sapienza Università di Roa fariba.ranjbar@uniroa1.it

More information

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket.

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket. Generalized eigenfunctions and a Borel Theore on the Sierpinski Gasket. Kasso A. Okoudjou, Luke G. Rogers, and Robert S. Strichartz May 26, 2006 1 Introduction There is a well developed theory (see [5,

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

Intelligent Systems: Reasoning and Recognition. Artificial Neural Networks

Intelligent Systems: Reasoning and Recognition. Artificial Neural Networks Intelligent Systes: Reasoning and Recognition Jaes L. Crowley MOSIG M1 Winter Seester 2018 Lesson 7 1 March 2018 Outline Artificial Neural Networks Notation...2 Introduction...3 Key Equations... 3 Artificial

More information

Optical Properties of Plasmas of High-Z Elements

Optical Properties of Plasmas of High-Z Elements Forschungszentru Karlsruhe Techni und Uwelt Wissenschaftlishe Berichte FZK Optical Properties of Plasas of High-Z Eleents V.Tolach 1, G.Miloshevsy 1, H.Würz Project Kernfusion 1 Heat and Mass Transfer

More information

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13

CSE525: Randomized Algorithms and Probabilistic Analysis May 16, Lecture 13 CSE55: Randoied Algoriths and obabilistic Analysis May 6, Lecture Lecturer: Anna Karlin Scribe: Noah Siegel, Jonathan Shi Rando walks and Markov chains This lecture discusses Markov chains, which capture

More information

In this chapter, we consider several graph-theoretic and probabilistic models

In this chapter, we consider several graph-theoretic and probabilistic models THREE ONE GRAPH-THEORETIC AND STATISTICAL MODELS 3.1 INTRODUCTION In this chapter, we consider several graph-theoretic and probabilistic odels for a social network, which we do under different assuptions

More information

Estimating Entropy and Entropy Norm on Data Streams

Estimating Entropy and Entropy Norm on Data Streams Estiating Entropy and Entropy Nor on Data Streas Ait Chakrabarti 1, Khanh Do Ba 1, and S. Muthukrishnan 2 1 Departent of Coputer Science, Dartouth College, Hanover, NH 03755, USA 2 Departent of Coputer

More information

A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness

A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness A Note on Scheduling Tall/Sall Multiprocessor Tasks with Unit Processing Tie to Miniize Maxiu Tardiness Philippe Baptiste and Baruch Schieber IBM T.J. Watson Research Center P.O. Box 218, Yorktown Heights,

More information

Kernel Methods and Support Vector Machines

Kernel Methods and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley ENSIAG 2 / osig 1 Second Seester 2012/2013 Lesson 20 2 ay 2013 Kernel ethods and Support Vector achines Contents Kernel Functions...2 Quadratic

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

Polygonal Designs: Existence and Construction

Polygonal Designs: Existence and Construction Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G

More information

A method to determine relative stroke detection efficiencies from multiplicity distributions

A method to determine relative stroke detection efficiencies from multiplicity distributions A ethod to deterine relative stroke detection eiciencies ro ultiplicity distributions Schulz W. and Cuins K. 2. Austrian Lightning Detection and Inoration Syste (ALDIS), Kahlenberger Str.2A, 90 Vienna,

More information

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Proc. of the IEEE/OES Seventh Working Conference on Current Measureent Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Belinda Lipa Codar Ocean Sensors 15 La Sandra Way, Portola Valley, CA 98 blipa@pogo.co

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t. CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when

More information

arxiv: v1 [math.na] 10 Oct 2016

arxiv: v1 [math.na] 10 Oct 2016 GREEDY GAUSS-NEWTON ALGORITHM FOR FINDING SPARSE SOLUTIONS TO NONLINEAR UNDERDETERMINED SYSTEMS OF EQUATIONS MÅRTEN GULLIKSSON AND ANNA OLEYNIK arxiv:6.395v [ath.na] Oct 26 Abstract. We consider the proble

More information

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon Model Fitting CURM Background Material, Fall 014 Dr. Doreen De Leon 1 Introduction Given a set of data points, we often want to fit a selected odel or type to the data (e.g., we suspect an exponential

More information

are equal to zero, where, q = p 1. For each gene j, the pairwise null and alternative hypotheses are,

are equal to zero, where, q = p 1. For each gene j, the pairwise null and alternative hypotheses are, Page of 8 Suppleentary Materials: A ultiple testing procedure for ulti-diensional pairwise coparisons with application to gene expression studies Anjana Grandhi, Wenge Guo, Shyaal D. Peddada S Notations

More information

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs

Algorithms for parallel processor scheduling with distinct due windows and unit-time jobs BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vol. 57, No. 3, 2009 Algoriths for parallel processor scheduling with distinct due windows and unit-tie obs A. JANIAK 1, W.A. JANIAK 2, and

More information

Effective joint probabilistic data association using maximum a posteriori estimates of target states

Effective joint probabilistic data association using maximum a posteriori estimates of target states Effective joint probabilistic data association using axiu a posteriori estiates of target states 1 Viji Paul Panakkal, 2 Rajbabu Velurugan 1 Central Research Laboratory, Bharat Electronics Ltd., Bangalore,

More information

Department of Electronic and Optical Engineering, Ordnance Engineering College, Shijiazhuang, , China

Department of Electronic and Optical Engineering, Ordnance Engineering College, Shijiazhuang, , China 6th International Conference on Machinery, Materials, Environent, Biotechnology and Coputer (MMEBC 06) Solving Multi-Sensor Multi-Target Assignent Proble Based on Copositive Cobat Efficiency and QPSO Algorith

More information

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes

Graphical Models in Local, Asymmetric Multi-Agent Markov Decision Processes Graphical Models in Local, Asyetric Multi-Agent Markov Decision Processes Ditri Dolgov and Edund Durfee Departent of Electrical Engineering and Coputer Science University of Michigan Ann Arbor, MI 48109

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES

TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES S. E. Ahed, R. J. Tokins and A. I. Volodin Departent of Matheatics and Statistics University of Regina Regina,

More information

Linear Algebra (I) Yijia Chen. linear transformations and their algebraic properties. 1. A Starting Point. y := 3x.

Linear Algebra (I) Yijia Chen. linear transformations and their algebraic properties. 1. A Starting Point. y := 3x. Linear Algebra I) Yijia Chen Linear algebra studies Exaple.. Consider the function This is a linear function f : R R. linear transforations and their algebraic properties.. A Starting Point y := 3x. Geoetrically

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016/2017 Lessons 9 11 Jan 2017 Outline Artificial Neural networks Notation...2 Convolutional Neural Networks...3

More information

Testing equality of variances for multiple univariate normal populations

Testing equality of variances for multiple univariate normal populations University of Wollongong Research Online Centre for Statistical & Survey Methodology Working Paper Series Faculty of Engineering and Inforation Sciences 0 esting equality of variances for ultiple univariate

More information

Soft Computing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis

Soft Computing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis Soft Coputing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis Beverly Rivera 1,2, Irbis Gallegos 1, and Vladik Kreinovich 2 1 Regional Cyber and Energy Security Center RCES

More information

On random Boolean threshold networks

On random Boolean threshold networks On rando Boolean threshold networs Reinhard Hecel, Steffen Schober and Martin Bossert Institute of Telecounications and Applied Inforation Theory Ul University Albert-Einstein-Allee 43, 89081Ul, Gerany

More information

An Adaptive UKF Algorithm for the State and Parameter Estimations of a Mobile Robot

An Adaptive UKF Algorithm for the State and Parameter Estimations of a Mobile Robot Vol. 34, No. 1 ACTA AUTOMATICA SINICA January, 2008 An Adaptive UKF Algorith for the State and Paraeter Estiations of a Mobile Robot SONG Qi 1, 2 HAN Jian-Da 1 Abstract For iproving the estiation accuracy

More information

i ij j ( ) sin cos x y z x x x interchangeably.)

i ij j ( ) sin cos x y z x x x interchangeably.) Tensor Operators Michael Fowler,2/3/12 Introduction: Cartesian Vectors and Tensors Physics is full of vectors: x, L, S and so on Classically, a (three-diensional) vector is defined by its properties under

More information

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians Using EM To Estiate A Probablity Density With A Mixture Of Gaussians Aaron A. D Souza adsouza@usc.edu Introduction The proble we are trying to address in this note is siple. Given a set of data points

More information

On the Use of A Priori Information for Sparse Signal Approximations

On the Use of A Priori Information for Sparse Signal Approximations ITS TECHNICAL REPORT NO. 3/4 On the Use of A Priori Inforation for Sparse Signal Approxiations Oscar Divorra Escoda, Lorenzo Granai and Pierre Vandergheynst Signal Processing Institute ITS) Ecole Polytechnique

More information

Decentralized Adaptive Control of Nonlinear Systems Using Radial Basis Neural Networks

Decentralized Adaptive Control of Nonlinear Systems Using Radial Basis Neural Networks 050 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 44, NO., NOVEMBER 999 Decentralized Adaptive Control of Nonlinear Systes Using Radial Basis Neural Networks Jeffrey T. Spooner and Kevin M. Passino Abstract

More information

Analyzing Simulation Results

Analyzing Simulation Results Analyzing Siulation Results Dr. John Mellor-Cruey Departent of Coputer Science Rice University johnc@cs.rice.edu COMP 528 Lecture 20 31 March 2005 Topics for Today Model verification Model validation Transient

More information

Tail estimates for norms of sums of log-concave random vectors

Tail estimates for norms of sums of log-concave random vectors Tail estiates for nors of sus of log-concave rando vectors Rados law Adaczak Rafa l Lata la Alexander E. Litvak Alain Pajor Nicole Toczak-Jaegerann Abstract We establish new tail estiates for order statistics

More information

Physics 215 Winter The Density Matrix

Physics 215 Winter The Density Matrix Physics 215 Winter 2018 The Density Matrix The quantu space of states is a Hilbert space H. Any state vector ψ H is a pure state. Since any linear cobination of eleents of H are also an eleent of H, it

More information

Ufuk Demirci* and Feza Kerestecioglu**

Ufuk Demirci* and Feza Kerestecioglu** 1 INDIRECT ADAPTIVE CONTROL OF MISSILES Ufuk Deirci* and Feza Kerestecioglu** *Turkish Navy Guided Missile Test Station, Beykoz, Istanbul, TURKEY **Departent of Electrical and Electronics Engineering,

More information