Statistics 102 for Multisource-Multitarget Detection and Tracking


Ronald Mahler

Abstract: This tutorial paper summarizes the motivations, concepts, and techniques of finite-set statistics (FISST), a system-level, top-down, direct generalization of ordinary single-sensor, single-target engineering statistics to the realm of multisensor, multitarget detection and tracking. Finite-set statistics provides powerful new conceptual and computational methods for dealing with multisensor-multitarget detection and tracking problems. The paper describes how multitarget integro-differential calculus is used to extend conventional single-sensor, single-target formal Bayesian motion and measurement modeling to general tracking problems. Given such models, the paper describes the Bayes-optimal approach to multisensor-multitarget detection and tracking: the multisensor-multitarget recursive Bayes filter. Finally, it describes how multitarget calculus is used to derive principled statistical approximations of this optimal filter, such as PHD filters, CPHD filters, and multi-Bernoulli filters.

Index Terms: multitarget tracking, multitarget detection, data fusion, finite-set statistics, FISST, random sets.

I. INTRODUCTION

This paper is a sequel to, and update of, a tutorial published in 2004 for the IEEE Aerospace and Electronic Systems Magazine [1]. That tutorial described some central ideas of finite-set statistics (FISST). Finite-set statistics is a systematic, unified approach to multisensor-multitarget detection, tracking, and information fusion. It has been the subject of considerable worldwide research interest during the last decade, including more than 600 research publications by researchers in more than a dozen nations.
I attribute this interest to the fact that finite-set statistics: is based on explicit, comprehensive, unified statistical models of multisensor-multitarget systems; unifies two disparate goals of multitarget tracking (target detection and state estimation) into a single, seamless, Bayes-optimal procedure; results in new multitarget tracking algorithms (PHD filters, CPHD filters, multi-Bernoulli filters, etc.) that do not require measurement-to-track association, while still achieving tracking performance (localization accuracy, speed) comparable to or better than conventional multitarget tracking algorithms; results in promising generalized CPHD and multi-Bernoulli filters that can operate in unknown, dynamically changing clutter and detection backgrounds; and, more generally, has been a fertile source of fundamentally new approaches in multisource-multitarget tracking and information fusion.

Copyright (c) 2013 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org. R. Mahler is with Lockheed Martin Advanced Technology Laboratories, Eagan, MN. ronald.p.mahler@lmco.com.

The emphasis of the earlier tutorial was on the answers to three questions: How does one Bayes-optimally detect and track multiple noncooperative targets using multiple, imperfect sensors? How does one correctly model multisensor-multitarget systems so that Bayes-optimality is possible? How does one accomplish this using a "Statistics 101"-like formalism that is specifically designed for solving multitarget tracking and data fusion problems? The answer to the first question (the multisource-multitarget recursive Bayes filter) is computationally intractable in all but the simplest problems. The answers to the second and third questions (multitarget formal Bayes modeling and multitarget integro-differential calculus, respectively) were addressed only at a very high level.
Thus this paper begins where the previous one left off, with emphasis on answers to the following, consequent questions: How does one actually construct faithful Bayesian models of multisensor-multitarget systems? How does one approximate the optimal multisource-multitarget recursive Bayes filter in a principled statistical manner, meaning that the underlying models and their relationships are preserved as faithfully as possible? What mathematical machinery, what specific multitarget "Statistics 101" methodology, makes this possible? The earlier tutorial paper was written at a very elementary level (it was presumed that even Bayes' rule might be an unfamiliar concept). This paper continues along the same path, but does presume some basic knowledge. This includes undergraduate probability and calculus, motion and measurement models, probability density functions, likelihood functions, Markov transition densities, and so on. It should also be emphasized that the paper is a tutorial introduction to, not a survey of, finite-set statistics. It includes pointers to some of the most significant developments, but these are by no means exhaustive. The paper is organized as follows. Section II describes the engineering philosophy that motivates finite-set statistics. Section III presents a review of "Statistics 101" for single-sensor, single-target systems, focusing on the single-sensor, single-target recursive Bayes filter. Section IV summarizes its generalization to multisensor-multitarget "Statistics 101," focusing

1. By "Bayes-optimal," I mean that target state(s) are determined by a state estimator, applied to the posterior distribution of a Bayes filter, that minimizes the Bayes risk corresponding to some cost function (see [20], p. 63).

on motion and measurement modeling and the multisensor-multitarget recursive Bayes filter. Section V provides an overview of the primary approximations of this filter: the PHD, CPHD, and multi-Bernoulli filters. Section VI summarizes the principled statistical approximation methodology that leads to these filters. Conclusions can be found in Section VII.

II. THE PHILOSOPHY OF FINITE-SET STATISTICS

Multisensor, multitarget systems introduce a major complication that is absent from single-sensor, single-target problems: they are comprised of randomly varying numbers of randomly varying objects of various kinds. These include varying numbers of targets; varying numbers of sensors, with varying numbers of sensor measurements collected by each sensor; and varying numbers of sensor-carrying platforms. A rigorous mathematical foundation for stochastic multiobject problems, point process theory [4], [37], has been in existence for a half-century. However, this theory has traditionally been formulated with the requirements of mathematicians rather than tracking and data fusion engineers in mind. The formulation usually preferred by mathematicians, random counting measures, is inherently abstract and complex (especially in regard to probabilistic foundations) and not easily assimilable with engineering physical intuition.

A. Motivations and Objectives

The fundamental motivation for finite-set statistics is: tracking and information fusion R&D engineers should not have to be virtuoso experts in point process theory in order to produce meaningful engineering innovations. As was emphasized in [1], engineering statistics is a tool and not an end in itself. It must have two qualities: Trustworthiness: it is constructed upon a systematic, reliable mathematical foundation, to which we can appeal when the going gets rough. "Fire and forget": this foundation can be safely neglected in most situations, leaving a serviceable mathematical machinery in its place.
These two qualities are inherently in tension. If foundations are so mathematically complex that they cannot be taken for granted in most engineering situations, then they are shackles and not foundations. But if they are so simple that they repeatedly result in engineering blunders, then they are simplistic rather than simple. This inherent gap between trustworthiness and engineering pragmatism is what finite-set statistics attempts to bridge. Four objectives are paramount: directly generalize familiar single-sensor, single-target Bayesian "Statistics 101" concepts to the multisource-multitarget realm; avoid all avoidable abstractions; as much as possible, replace theorem-proving with mechanical, turn-the-crank, purely algebraic procedures; and nevertheless retain all mathematical power necessary for effective engineering problem-solving.

B. Overview

It is worthwhile to begin by first comparing the FISST random finite set (RFS) paradigm with the ubiquitous conventional paradigm: measurement-to-track association (MTA).

1) The Standard Measurement Model: The most familiar tracking algorithms presume the following measurement model, one that has its origins in radar tracking. A radar amplitude-signature for a given range-bin, azimuth, and elevation is subjected to a thresholding procedure such as CFAR (constant false alarm rate). If the amplitude exceeds the threshold, there are two possible reasons for the existence of this "blip." First, it was caused by an actual target, in which case a target detection has occurred at z. Second, it was caused by a momentary surge of background noise, in which case a false detection or false alarm has occurred at z. A third possibility, that a target was present but was not detected, is referred to as a missed detection. For target-generated detections, the "small target" model is presumed. Targets are distant enough (relative to the radar's resolution capability) that a single target generates a single detection.
But they are also near enough that only a single target is responsible for any detection.

2) Measurement-to-Track Association (MTA): Because of the small-target assumption, a bottom-up, divide-and-conquer strategy can be applied to the multitarget detection and tracking problem ([20]). Suppose that, at time t_k, we are in possession of n "tracks" (ℓ_1, x_1, P_1), ..., (ℓ_n, x_n, P_n), i.e., possible targets, where, for the i-th track, x_i is its state (position, velocity), P_i its error covariance matrix, and ℓ_i its track label. The Gaussian distribution f_i(x) = N_{P_i}(x − x_i) is the track density of the i-th track. Next, suppose that at time t_{k+1} we collect m detections Z = {z_1, ..., z_m}. Typically m > n, because of false alarms. The prediction step of an extended Kalman filter (EKF) is used to construct predicted tracks (ℓ_1, x⁺_1, P⁺_1), ..., (ℓ_n, x⁺_n, P⁺_n). We can then construct the following hypothesis α: for each i = 1, ..., n, the predicted target (ℓ_i, x⁺_i, P⁺_i) generated the detection z_{α(i)}; or, alternatively, generated no detection. The "excess" measurements {z_1, ..., z_m} − {z_{α(1)}, ..., z_{α(n)}} are interpreted as false alarms or as having been generated by previously undetected targets. The hypothesis α is an MTA. Taking all possibilities into account, we end up with a list of MTAs. For each MTA, we can apply the update step of an EKF, using z_{α(i)} to construct a revised track (ℓ_i, x_{α(i)}, P_{α(i)}). Multi-hypothesis trackers (MHTs) are currently the dominant tracking algorithms based on MTA.

3) Association-Free Multitarget Detection and Tracking: In contrast to MTA, FISST employs a top-down paradigm grounded in point process theory, specifically, in the theory of random finite sets (RFSs). In place of the hypothesis-list, one has a probability distribution f_{k|k}(X|Z^(k)) on the finite-set variable X = {x_1, ..., x_n} with n ≥ 0, where Z^(k): Z_1, ..., Z_k is the time-history of measurement-sets at time t_k. Instead of the standard

measurement model just described, one constructs from it a multitarget likelihood function L_Z(X) = f_{k+1}(Z|X). The value L_Z(X) is the likelihood that a measurement-set Z will be generated if targets with state-set X are present. Given this, a multitarget version of the recursive Bayes filter (Section IV-C) is applied instead of the MTA procedure. Since this Bayes filter will in general be computationally intractable, it must be approximated, resulting in the PHD, CPHD, multi-Bernoulli, and other filters (Section V). The following point should be emphasized: RFS algorithms are capable of true tracking. It is often asserted, to the contrary, that RFS algorithms are inherently incapable of constructing time-sequences of labeled tracks, because finite sets are order-independent. This is not the case. As was explained in [20], target states have, in general, the form (ℓ, x), where ℓ is an identifying label unique to each track and x is the kinematic state. Given this, the multitarget Bayes filter, as well as any RFS approximation of it, including PHD and CPHD filters, can maintain temporally connected tracks. In particular, Vo and Vo have used this approach to devise an exact, closed-form, computationally tractable solution to the multitarget recursive Bayes filter [42], [43]. Because the solution is exact, this filter's track-management scheme is provably Bayes-optimal.

4) Nonstandard Measurement Models: However ubiquitous the standard model may be, it is actually an approximation: the result of applying a detection process to a sensor signature. RFS models and filters are being developed for "nonstandard" sensor sources that supply raw signature information. Perhaps the two most interesting instances are: RFS models and multi-Bernoulli track-before-detect (TBD) filters for pixelized image data [11], [12]. These filters have been shown to outperform the previously-best TBD filter, the histogram-PMHT filter. RFS models and CPHD filters for superpositional sensors [4], [35], [47].
These filters have been shown to significantly outperform a conventional MCMC (Markov chain Monte Carlo) approach, while also being much faster.

III. SINGLE-SENSOR, SINGLE-TARGET SYSTEMS

The purpose of this section is to summarize the basic elements of the conventional "Statistics 101" toolbox.

A. Single-Sensor, Single-Target Recursive Bayes Filter

The primary tool is the recursive Bayes filter, the foundation for optimal single-sensor, single-target tracking. At various times t_1 ≤ t_2 ≤ ..., a single sensor, with unity probability of detection and no clutter, interrogates a single noncooperative target. The time-sequence of collected measurements is Z^k: z_1, ..., z_k, and the state of the target (the information about it that we wish to know: position, velocity, type, etc.) is x. The Bayes filter propagates a Bayes posterior distribution through time:

f_{k|k}(x|Z^k) --predictor--> f_{k+1|k}(x|Z^k) --corrector--> f_{k+1|k+1}(x|Z^{k+1}) --estimator--> x̂_{k+1|k+1}

Here, f_{k|k}(x|Z^k) is the probability (density) that the target has state x, given the accumulated information Z^k. The predictor (time-update) step accounts for the increase in uncertainty in the target state between measurement collections. The corrector (measurement-update) step permits fusion of the newest measurement z_{k+1} with previous measurements. These steps are defined by the time-prediction integral

f_{k+1|k}(x|Z^k) = ∫ f_{k+1|k}(x|x') f_{k|k}(x'|Z^k) dx'    (1)

and Bayes' rule

f_{k+1|k+1}(x|Z^{k+1}) = f_{k+1}(z_{k+1}|x) f_{k+1|k}(x|Z^k) / f_{k+1}(z_{k+1}|Z^k)    (2)

where

f_{k+1}(z_{k+1}|Z^k) = ∫ f_{k+1}(z_{k+1}|x) f_{k+1|k}(x|Z^k) dx    (3)

is the Bayes normalization factor. The estimator step consists of a Bayes-optimal state estimator, such as the maximum a posteriori (MAP) estimator:

x̂^MAP_{k+1|k+1} = arg sup_x f_{k+1|k+1}(x|Z^{k+1})    (4)

("Bayes-optimal" means that the estimator minimizes the Bayes risk corresponding to some cost function [20], p. 63.) The Bayes filter formulas require knowledge of two a priori density functions: the target Markov transition density f_{k+1|k}(x|x') and the sensor likelihood function f_{k+1}(z|x).
The former, f_{k+1|k}(x|x'), is the probability (density) that the target will have state x at time t_{k+1} if it had state x' at time t_k. The latter, f_{k+1}(z|x), is the probability (density) that the sensor will collect measurement z at time t_{k+1} if a target with state x is present. By "true" formulas for f_{k+1|k}(x|x') and f_{k+1}(z|x) is meant: f_{k+1|k}(x|x') and f_{k+1}(z|x) faithfully incorporate the motion and measurement models; and no extraneous information has inadvertently been introduced into them.

B. Moment Approximations of the Bayes Filter

Historically, the Bayes filter has typically been implemented using moment approximations. Let f(x) = N_P(x − x_0) denote a Gaussian distribution with mean x_0 (the first-order moment of f(x)) and covariance matrix P (a second-order moment of f(x)). Assume that the signal-to-noise ratio (SNR) is large enough that the track distributions can be approximately characterized by their first- and second-order moments:

f_{k|k}(x|Z^k) ≅ N_{P_{k|k}}(x − x_{k|k})    (5)
f_{k+1|k}(x|Z^k) ≅ N_{P_{k+1|k}}(x − x_{k+1|k})    (6)
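As a concrete illustration, the predictor-corrector-estimator recursion of Eqs. (1)-(4) can be sketched numerically on a one-dimensional grid. The random-walk motion model, Gaussian likelihood, and all parameter values below are illustrative assumptions, not part of the paper:

```python
import numpy as np

# Illustrative sketch of Eqs. (1)-(4): a 1-D target on a grid, with an
# assumed random-walk motion model and Gaussian measurement noise.
# All models and parameter values here are hypothetical.

x = np.linspace(-10.0, 10.0, 401)     # state grid
dx = x[1] - x[0]

def gaussian(u, sigma):
    return np.exp(-0.5 * (u / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

def predict(posterior, sigma_w=0.5):
    # Eq. (1): time-prediction integral with Markov density N(x - x'; sigma_w)
    K = gaussian(x[:, None] - x[None, :], sigma_w)
    return K @ posterior * dx

def correct(prior, z, sigma_v=1.0):
    # Eqs. (2)-(3): Bayes' rule with likelihood N(z - x; sigma_v)
    unnormalized = gaussian(z - x, sigma_v) * prior
    return unnormalized / (unnormalized.sum() * dx)

posterior = np.full_like(x, 1.0 / (x[-1] - x[0]))   # diffuse prior
for z in [1.1, 0.9, 1.3, 1.0]:                      # measurements of a target near x = 1
    posterior = correct(predict(posterior), z)

x_map = x[np.argmax(posterior)]                     # Eq. (4): MAP estimate
```

With four measurements near x = 1, the posterior concentrates there and the MAP estimate lands close to the measurements.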

Then the Bayes filter can be replaced by a filter, the extended Kalman filter (EKF), that propagates the first- and second-order moments:

(x_{k|k}, P_{k|k}) → (x_{k+1|k}, P_{k+1|k}) → (x_{k+1|k+1}, P_{k+1|k+1})

Similarly, assume that SNR is large enough that the track distributions can be approximately characterized by their first-order moments:

f_{k|k}(x|Z^k) ≅ N_{P_0}(x − x_{k|k})    (7)
f_{k+1|k}(x|Z^k) ≅ N_{P_0}(x − x_{k+1|k})    (8)

for some fixed covariance P_0. Then the Bayes filter can be replaced by a filter, for example an alpha-beta filter, that propagates only the first-order moment:

x_{k|k} → x_{k+1|k} → x_{k+1|k+1}

C. Single-Target Motion Modeling

Target modeling is schematically summarized in Figure 1. At the top, interim target motion is mathematized as a statistical motion model. The function x = φ_k(x') states that the target will have state x at time t_{k+1} if it had state x' at time t_k. Since this equation is typically just a guess, it is randomly perturbed by the motion noise ("plant noise") W_k, with probability distribution f_{W_k}(x). The information contained in this model is equivalent to that contained in the next line of Figure 1, the probability-mass function (p.m.f.)

p_{k+1|k}(S|x') = Pr(X_{k+1} ∈ S | x')    (9)

The p.m.f. is equivalent to the probability density function (p.d.f.) f_{k+1|k}(x|x'):

f_{k+1|k}(x|x') = f_{W_k}(x − φ_k(x'))    (10)

This formula is a standard result easily found in standard textbooks. It is a consequence of the following equation:

p_{k+1|k}(S|x') = ∫_S f_{k+1|k}(x|x') dx    (11)

The validity of Eq. (11) ensures that Eq. (10) is "true," because it means that p_{k+1|k}(S|x') and f_{k+1|k}(x|x') are entirely equivalent statistical descriptors of X_{k+1}.

Figure 1: Single-Sensor, Single-Target Motion Modeling. (The figure's chain: motion model X_{k+1} = φ_k(x') + W_k, with deterministic prediction φ_k(x') and plant noise W_k; its p.m.f. p_{k+1|k}(S|x') = Pr(X_{k+1} ∈ S|x'); the true Markov density f_{k+1|k}(x|x'), obtained via single-object calculus; and the Bayes filter predictor f_{k|k}(x|Z^k) → f_{k+1|k}(x|Z^k) via the single-target prediction integral.)

D. Single-Sensor, Single-Target Measurement Modeling

Sensor modeling is schematically summarized in Figure 2. We begin with a statistical measurement model.
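A minimal sketch of Eq. (10), under an assumed nearly-constant-velocity motion model with additive Gaussian plant noise (all parameter values hypothetical):

```python
import numpy as np

# Sketch of Eq. (10) for an assumed nearly-constant-velocity model:
# phi_k(x') = F x' with additive Gaussian plant noise W_k ~ N(0, Q), so
# that f_{k+1|k}(x|x') = f_W(x - F x').  Parameter values are illustrative.

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])            # state: (position, velocity)
Q = np.diag([0.1, 0.05])              # plant-noise covariance

def markov_density(x, x_prev):
    # Eq. (10): evaluate the plant-noise density at the residual x - phi(x')
    r = x - F @ x_prev
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** 2 * np.linalg.det(Q))
    return norm * np.exp(-0.5 * r @ np.linalg.inv(Q) @ r)

rng = np.random.default_rng(0)
x_prev = np.array([0.0, 1.0])
samples = F @ x_prev + rng.multivariate_normal(np.zeros(2), Q, size=20000)
# Consistency with Eq. (11): the sample mean of X_{k+1} matches phi_k(x').
print(samples.mean(axis=0))           # close to [1.0, 1.0]
```

The Monte Carlo samples of the motion model and the closed-form density of Eq. (10) describe the same random state, which is exactly the equivalence that Eq. (11) asserts.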
The function z = η_{k+1}(x) states that the sensor will collect measurement z at time t_{k+1} if a target with state x is present. Because of sensor noise, this formula must be randomly perturbed by a noise vector V_{k+1} with distribution f_{V_{k+1}}(z). The information in this model is equivalent to that in the p.m.f.

p_{k+1}(T|x) = Pr(Z_{k+1} ∈ T | x)    (12)

This p.m.f., and thus the original measurement model, is equivalent to the p.d.f. f_{k+1}(z|x):

f_{k+1}(z|x) = f_{V_{k+1}}(z − η_{k+1}(x))    (13)

The fact that this formula is "true" is assured by the equation

p_{k+1}(T|x) = ∫_T f_{k+1}(z|x) dz    (14)

Figure 2: Single-Sensor, Single-Target Measurement Modeling. (The figure's chain: measurement model Z_{k+1} = η_{k+1}(x) + V_{k+1}, with deterministic measurement η_{k+1}(x) and sensor noise V_{k+1}; its p.m.f. p_{k+1}(T|x) = Pr(Z_{k+1} ∈ T|x); the true likelihood function f_{k+1}(z|x), obtained via single-object calculus; and the Bayes filter corrector f_{k+1|k}(x|Z^k) → f_{k+1|k+1}(x|Z^{k+1}) via single-target Bayes' rule.)

E. Single-Target, Multisensor Data Fusion

Suppose that we have two sensors, with (as in the single-sensor case) unity probability of detection and no clutter. Measurement-collection times are identical (synchronous), so that the sensors collect measurement-streams Z^{1,k} and Z^{2,k}, with z^1_j and z^2_j collected simultaneously at time t_j for any j = 1, ..., k. Let the respective likelihood functions be

L^1_{z^1}(x) = f^1_{k+1}(z^1|x),  L^2_{z^2}(x) = f^2_{k+1}(z^2|x)    (15)

If the sensors are independent, then their joint likelihood function has the form

L_{z^1,z^2}(x) = L^1_{z^1}(x) · L^2_{z^2}(x)    (16)

Measurements are optimally fused using Bayes' rule:

f_{k+1|k+1}(x|Z^{1,k+1}, Z^{2,k+1}) = L^1_{z^1}(x) L^2_{z^2}(x) f_{k+1|k}(x|Z^{1,k}, Z^{2,k}) / f_{k+1}(z^1, z^2|Z^{1,k}, Z^{2,k})    (17)

where

f_{k+1}(z^1, z^2|Z^{1,k}, Z^{2,k}) = ∫ L^1_{z^1}(x) L^2_{z^2}(x) f_{k+1|k}(x|Z^{1,k}, Z^{2,k}) dx    (18)
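The behavior of the product rule, Eq. (16), can be previewed numerically. The setup below (grid, noise levels, near-uniform prior, and the averaging rule it is contrasted with, which Section F criticizes) consists of illustrative assumptions:

```python
import numpy as np

# Numerical sketch of two-sensor fusion: sensor 1 observes the x-coordinate
# and sensor 2 the y-coordinate of a planar target.  The grid, noise levels,
# and near-uniform prior are illustrative assumptions.

g = np.linspace(-20.0, 20.0, 201)
X, Y = np.meshgrid(g, g, indexing="ij")
dA = (g[1] - g[0]) ** 2

def N(u, s):
    return np.exp(-0.5 * (u / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)

sigma, sigma0 = 1.0, 10.0
z1, z2 = 5.0, -3.0
prior = N(X, sigma0) * N(Y, sigma0)            # effectively uniform prior
L1, L2 = N(z1 - X, sigma), N(z2 - Y, sigma)    # the two likelihood functions

def posterior(L):
    p = L * prior
    return p / (p.sum() * dA)

def spread(p):   # total variance about the posterior mean
    mx, my = (X * p).sum() * dA, (Y * p).sum() * dA
    return (((X - mx) ** 2 + (Y - my) ** 2) * p).sum() * dA

var_prod = spread(posterior(L1 * L2))          # product fusion, Eq. (16)
var_avg = spread(posterior(0.5 * (L1 + L2)))   # averaging alternative
print(var_prod < var_avg)                      # product fusion localizes better
```

The product posterior concentrates near the triangulated point (z1, z2), while the averaged posterior retains long ridges inherited from each sensor's unobserved coordinate, so its total variance is far larger.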

F. Data Fusion Via Averaging?

The Bayes-optimal fusion approach, Eq. (16), employs a product likelihood. Consider the counterproposal in [38] that one should, instead, use an average:

L_{z^1,z^2}(x) = (1/2) (L^1_{z^1}(x) + L^2_{z^2}(x))    (19)

One reviewer of this paper objected that this method hardly merits mention, because data fusion averaging is not a sensible approach. Readers' patience is nevertheless requested, because it is being explicitly or implicitly promoted by rather powerful individuals. Eq. (19) is problematic because, whereas product likelihoods inherently improve target localization, average likelihoods inherently worsen it. Consider the following simple example: two bearing-only sensors in the plane, with respective Gaussian likelihood functions

f^1_{k+1}(z^1|x, y) = N_{σ²}(z^1 − x),  f^2_{k+1}(z^2|x, y) = N_{σ²}(z^2 − y)    (20)

That is, the sensors are oriented so as to triangulate the position (x, y) of the target. For conceptual clarity, let the prior distribution be

f_{k+1|k}(x, y) = N_{σ₀²}(x − x₀) N_{σ₀²}(y − y₀)    (21)

where σ₀ is arbitrarily large, so that f_{k+1|k}(x, y) is effectively uniform. Let z^1, z^2 be the measurements collected by the sensors. Then Bayes' rule yields

f_Bayes(x, y|z^1, z^2) ≅ N_{σ²}(x − z^1) N_{σ²}(y − z^2)    (22)

This results in a triangulated localization at (z^1, z^2) with variance proportional to σ². But with the average likelihood,

f_av(x, y|z^1, z^2) ≅ (1/2) (N_{σ²}(x − z^1) N_{σ₀²}(y − y₀) + N_{σ₀²}(x − x₀) N_{σ²}(y − z^2))    (23)

This distribution has four "tails" whose lengths increase with the size of its variance, which is proportional to σ₀². Now apply additional bearing-only sensors, all with orientations different from the first two and from each other. The variance increases with the number of averaged sensors, whereas it greatly decreases if Bayes' rule is used instead. As we shall see in Section V-C, a generalization of Eq. (19) is what leads to the very poor performance of the average-based multisensor PHD filter proposed in [38].

G. Constructing Markov Densities and Likelihoods

Eqs. (10) and (13) do not tell us how to construct explicit formulas for f_{k+1|k}(x|x') and f_{k+1}(z|x) from explicit formulas for p_{k+1|k}(S|x') and p_{k+1}(T|x). This can be accomplished as follows. Let E_x be an arbitrarily small region around x, of (hyper)volume |E_x|. Then

p_{k+1|k}(E_x|x') = ∫_{E_x} f_{k+1|k}(y|x') dy    (24)
≅ f_{k+1|k}(x|x') |E_x|    (25)

and therefore

f_{k+1|k}(x|x') = lim_{|E_x| → 0} p_{k+1|k}(E_x|x') / |E_x|    (26)

This, the Lebesgue differentiation theorem, provides one (but not the only) way of deriving f_{k+1|k}(x|x') from p_{k+1|k}(S|x'), using a constructive Radon-Nikodým derivative ([9]). For the model in Figure 1 we have

p_{k+1|k}(E_x|x') = Pr(φ_k(x') + W_k ∈ E_x)    (27)
≅ f_{W_k}(x − φ_k(x')) |E_x|    (28)

from which Eq. (10) follows.

IV. MULTISENSOR-MULTITARGET SYSTEMS

The purpose of this section is to summarize the basic elements of multisensor-multitarget "Statistics 101."

A. Random Finite Sets (RFSs)

This section introduces the concept of an RFS as the multitarget analog of a random vector.

1) Random Single-Target States: The state x of a single-target system may (as an example) have the form x = (x, y, ẋ, ẏ, c), where x, y are position variables, ẋ, ẏ are velocity variables, and c is a discrete identity variable (which could be, for instance, a track label). In a Bayesian approach, the state at time t_k must be a random state X_k. The precise mathematical definition of a random state X_k requires that it actually be a measurable mapping from a probability space to the state space. In turn, the state space must be equipped with a topology, typically (but not always) a Euclidean topology. While such details are mathematically necessary, for engineering purposes they can usually be taken for granted.

2) Random Multitarget States: The state of a multitarget system, on the other hand, is most accurately represented as a finite set of the form X = {x_1, ..., x_n}. Here, not only the individual target states x_1, ..., x_n are random, but also their number n (the cardinality). This includes the possibility n = 0 (no targets are present), in which case we write X = ∅ (the null set).
The finite-set representation is most natural because, given that each target already has its own unique identity (as indicated by a variable such as c), from a physical point of view the targets have no inherent order. Thus in a Bayesian formulation, a state-set is actually a random state-set Ξ: it is a random finite set (RFS). Similar comments apply to the measurements collected from the targets by a sensor. These also usually have no inherent physical ordering. They thus have the form Z = {z_1, ..., z_m}, where not only the individual measurements z_1, ..., z_m are random, but also their number m ≥ 0. Thus

in a Bayesian development, a measurement-set is actually a random measurement-set Σ_{k+1}: an RFS. The RFS representation is more engineering-friendly than the random-measure representation of standard point process theory. A finite set {x_1, ..., x_n} is easily visualizable as a point pattern, for example in the plane or in three dimensions. Similarly, an RFS is easily visualizable as a random point pattern. An everyday example of an RFS: the stars in a night sky, with many stars winking in and out and/or slightly varying in their apparent positions ([20]).

3) "Fire-and-Forget" Foundations of RFSs: If we are to have a trustworthy mathematical foundation for multisensor-multitarget systems, we must precisely define RFSs. This forces us to define topologies on so-called hyperspaces, that is, spaces whose points are subsets (in our case, finite subsets) of some other space X. Two hyperspaces are of engineering interest: the hyperspace of finite subsets of the state space, and the hyperspace of finite subsets of the measurement space. If we employed the random-measure formulation of point process theory, we would be forced to work with abstract probability measures defined on measurable subsets of an abstract space whose points are counting measures. In an arbitrary RFS formulation, this would be replaced by equally abstract probability measures p_Ξ(O) = Pr(Ξ ∈ O), defined on measurable subsets O of the hyperspace, with any point of O being a finite subset of X. Luckily, a simpler stochastic-geometry formulation [37] is available. Its hyperspace topology, the Fell-Matheron topology, allows p_Ξ(O) to be equivalently replaced by the multitarget analog of a conventional probability-mass function p_X(S) = Pr(X ∈ S). This is the belief-mass function (b.m.f.)

β_Ξ(S) = Pr(Ξ ⊆ S)    (29)

which is defined on (closed) subsets S of X.

4) "Fire-and-Forget" Foundations of Multitarget Tracking: The upshot of Eq.
(29) is that, in finite-set statistics, it is usually possible to entirely avoid abstractions such as topologies, measurable mappings, and the randomness of finite sets in the formal sense. More generally, finite-set statistics is intentionally formulated as a stripped-down version of point process theory, one in which we attempt to avoid all avoidable abstractions. As an illustration, concepts such as "thinning" and "marking" are basic to purely mathematical treatments of the subject. But in multitarget detection and tracking, these concepts appear only in a few concrete contexts that can be adequately addressed at a purely engineering level. Missed detections and disappearing targets can both be described as forms of thinning, and target identity as a form of marking. But from an engineering perspective, does the imposition of such concepts represent an increase of content, or of pedantry?

The reason is as follows. Consider the set function T_Ξ(S) = 1 − β_Ξ(S^c) = Pr(Ξ ∩ S ≠ ∅). Then the Choquet-Matheron capacity theorem states that the additive measure p_Ξ(O) is equivalent to the nonadditive measure T_Ξ(S), in the sense that both completely characterize the probabilistic behavior of Ξ (see [9] or [20], p. 713). As a second illustration, in FISST density functions are systematically used in place of measures, except when this is not possible. Thus the Dirac delta function is employed even though it produces engineering-heuristic abbreviations of rigorous expressions (as in Eq. (81), for example).

B. Probability Distributions of RFSs

Just as a random state-vector X has a probability density f(x) = f_X(x), so an RFS Ξ has a multitarget probability density function (m.p.d.f.)

f(X) = f_Ξ(X)    (30)

Its form varies with the number of targets:

f(X) = f(∅) if X = ∅; f({x_1}) if X = {x_1}; ...; f({x_1, ..., x_n}) if |X| = n, X = {x_1, ..., x_n}; ...    (31)

where |X| denotes the number of elements in X. Also, its units of measurement vary with target number: if κ are the units of x, then the units of f(X) are κ^{−|X|}.
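As a sketch, assume (purely for illustration) a Poisson multiobject density f({x_1, ..., x_n}) = e^{−μ} μ^n b(x_1) ··· b(x_n) on the interval [0, 1]. Its cardinality distribution, obtained by integrating over each x_i and dividing by n!, recovers the ordinary Poisson distribution:

```python
import numpy as np
from math import exp, factorial

# Illustrative sketch of a multiobject density and its cardinality
# distribution: an assumed Poisson multiobject density
# f({x_1,...,x_n}) = e^{-mu} * prod_i mu*b(x_i) on [0, 1].

mu = 2.0
grid = np.linspace(0.0, 1.0, 101)
b = np.full_like(grid, 1.0)          # assumed uniform spatial density on [0, 1]
dx = grid[1] - grid[0]

def integral(values):                # trapezoidal rule on the grid
    return (values[:-1] + values[1:]).sum() / 2.0 * dx

def cardinality(n):
    # p(n) = (1/n!) * (n-fold integral of f); the n-fold integral factors
    # into a product of n identical single-variable integrals of mu*b(x).
    return exp(-mu) * integral(mu * b) ** n / factorial(n)

card = [cardinality(n) for n in range(10)]
print(card[:3])    # matches the Poisson distribution e^{-mu} mu^n / n!
```

The 1/n! factor compensates for the n! orderings of an unordered n-point set, which is why the units and the normalization of f(X) work out cardinality by cardinality.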
In general, a function f(X) that satisfies the same property with respect to units is a multitarget density function. Its set integral in the region S is defined to be

∫_S f(X) δX = f(∅) + Σ_{n≥1} (1/n!) ∫_{S × ··· × S} f({x_1, ..., x_n}) dx_1 ··· dx_n    (32)

where, as a convention, we define f({x_1, ..., x_n}) = 0 whenever |{x_1, ..., x_n}| ≠ n. The probability that there are n elements in Ξ is

p(n) = ∫_{|X|=n} f(X) δX    (33)
= (1/n!) ∫ f({x_1, ..., x_n}) dx_1 ··· dx_n    (34)

Thus p(n) for n ≥ 0 is a probability distribution on the number of targets: the cardinality distribution of Ξ.

C. Multisensor-Multitarget Recursive Bayes Filter

This filter is the theoretical foundation for optimal multisensor-multitarget detection and tracking. At times t_1 ≤ t_2 ≤ ..., one or more sensors interrogate an unknown number of unknown, noncooperative targets. The time-sequence of collected measurement-sets is Z^(k): Z_1, ..., Z_k. The multisensor-multitarget Bayes filter propagates a multitarget Bayes posterior distribution through time:

f_{k|k}(X|Z^(k)) --predictor--> f_{k+1|k}(X|Z^(k)) --corrector--> f_{k+1|k+1}(X|Z^(k+1)) --multitarget estimator--> X̂_{k+1|k+1}

These steps are defined by multitarget analogs of the time-prediction integral

f_{k+1|k}(X|Z^(k)) = ∫ f_{k+1|k}(X|X') f_{k|k}(X'|Z^(k)) δX'    (35)

and of Bayes' rule

f_{k+1|k+1}(X|Z^(k+1)) = f_{k+1}(Z_{k+1}|X) f_{k+1|k}(X|Z^(k)) / f_{k+1}(Z_{k+1}|Z^(k))    (36)

where

f_{k+1}(Z_{k+1}|Z^(k)) = ∫ f_{k+1}(Z_{k+1}|X) f_{k+1|k}(X|Z^(k)) δX    (37)

The estimator step consists of a multitarget Bayes-optimal state estimator. As was explained in [1], multitarget versions of the maximum a posteriori (MAP) and expected a posteriori (EAP) estimators do not exist. Rather, one must use alternative estimators ([20]). As in the single-sensor, single-target case, the multisensor-multitarget Bayes filter requires two a priori density functions: the multitarget Markov transition density f_{k+1|k}(X|X') and the multisensor-multitarget likelihood function f_{k+1}(Z|X). Here, f_{k+1|k}(X|X') is the probability (density) that the targets will have state-set X at time t_{k+1} if they had state-set X' at time t_k. Also, f_{k+1}(Z|X) is the probability (density) that the sensors will jointly collect measurement-set Z at time t_{k+1} if targets with state-set X are present. In the single-sensor, single-target case, the formulas for the Markov density and likelihood function, Eqs. (10) and (13), are never derived but, rather, simply looked up. In multisensor-multitarget problems this is not possible, because no standard references exist. Thus one must ask: How does one construct statistical multitarget motion models for any given application? In particular, how does one model phenomena such as target disappearance and target appearance? How does one construct statistical multisensor-multitarget measurement models for any particular set of sensors? In particular, how does one model phenomena such as sensor fields of view and clutter? Given such models, how does one construct formulas for the "true" multitarget Markov density and the "true" multisensor-multitarget likelihood function? That is, how does one know that they are not heuristic contrivances, or that no extraneous information has inadvertently been introduced?

D.
Multitarget Motion Modeling

This is summarized in Figure 3. At the top, interim target motions are represented as an RFS motion model. As an example, consider the most commonly assumed multitarget motion model, the "standard" such model. At time t_k, suppose that the target state-set is X' = {x'_1, ..., x'_{n'}} with |X'| = n'. At time t_{k+1}, each of the targets either persists or disappears. Let p_S(x') be the probability that x' will persist into time t_{k+1} and transition to some other (random) state X. Then x' will transition to

T_{k+1|k}(x') = ∅ if the target disappears (prob. 1 − p_S(x')); {X} if it persists (prob. p_S(x'))    (38)

Suppose that B_{k+1|k} is the set of targets that newly appear at time t_{k+1}. Then the RFS motion model has the form

Ξ_{k+1} = T_{k+1|k}(x'_1) ∪ ··· ∪ T_{k+1|k}(x'_{n'}) ∪ B_{k+1|k}    (39)

The set-theoretic union symbol indicates that, at time t_{k+1}, targets will be either persisting targets or new targets. It is assumed that T_{k+1|k}(x'_1), ..., T_{k+1|k}(x'_{n'}), B_{k+1|k} are independent. The information contained in this model is equivalent to that contained in the next line of Figure 3, the b.m.f. β_{k+1|k}(S|X'). Because of independence, it is

β_{k+1|k}(S|X') = β_{T_{k+1|k}(x'_1)}(S) ··· β_{T_{k+1|k}(x'_{n'})}(S) · β_{B_{k+1|k}}(S)    (40)

where the b.m.f. of T_{k+1|k}(x') is

β_{T_{k+1|k}(x')}(S) = Pr(T_{k+1|k}(x') ⊆ S)    (41)
= Pr(T_{k+1|k}(x') = ∅) + Pr(T_{k+1|k}(x') ≠ ∅, T_{k+1|k}(x') ⊆ S)    (42)
= 1 − p_S(x') + p_S(x') ∫_S f_{k+1|k}(x|x') dx    (43)

Also, the b.m.f. of B_{k+1|k} is normally chosen to be Poisson:

β_{B_{k+1|k}}(S) = exp(−μ_{k+1} + μ_{k+1} ∫_S b_{k+1}(x) dx)    (44)

Here μ_{k+1} is the expected number of appearing targets; and b_{k+1}(x), the spatial distribution, is the probability (density) that an appearing target will have state x. A central aspect of finite-set statistics is a set of procedures for deriving the formula for the multitarget Markov density f_{k+1|k}(X|X') from the formula for β_{k+1|k}(S|X'). These two statistical descriptors are related by the equation

β_{k+1|k}(S|X') = ∫_S f_{k+1|k}(X|X') δX    (45)

for all S. The validity of this equation ensures that the formula for f_{k+1|k}(X|X') is "true." This is because Eq. (45) states that f_{k+1|k}(X|X') and β_{k+1|k}(S|X') are equivalent. The explicit formula for f_{k+1|k}(X|X') is too complicated to state here; see [20].
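The standard motion model, Eqs. (38)-(39), can be simulated directly: survival thinning of each prior target plus a Poisson number of new targets. One-dimensional states, the random-walk transitions, and all parameter values below are illustrative assumptions:

```python
import numpy as np

# Simulation sketch of the standard multitarget motion model, Eqs. (38)-(39):
# each prior target survives with probability p_S and transitions; a Poisson
# number of new targets appears.  One-dimensional states and all parameter
# values are illustrative assumptions.

rng = np.random.default_rng(1)
p_S, mu_birth, sigma_w = 0.9, 1.5, 0.2

def step(X_prev):
    # Survival/thinning, Eq. (38): each x' yields either {} or one successor.
    survivors = [x + rng.normal(0.0, sigma_w)
                 for x in X_prev if rng.random() < p_S]
    # Poisson births (cf. Eq. (44)) with an assumed uniform spatial density.
    births = list(rng.uniform(-10.0, 10.0, rng.poisson(mu_birth)))
    return survivors + births   # Eq. (39): union of the independent RFSs

X_prev = [0.0, 2.0, 4.0]
counts = [len(step(X_prev)) for _ in range(20000)]
print(np.mean(counts))          # close to 3 * p_S + mu_birth = 4.2
```

The mean cardinality of the predicted state-set, n'·p_S + μ, falls out of the independence assumptions underlying Eq. (40).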

Figure 3: Multisensor-Multitarget Motion Modeling. (RFS motion model Ξ_{k+1} (surviving ∪ appearing targets) → belief-mass function β_{k+1|k}(S | x') = Pr(Ξ_{k+1} ⊆ S) → multiobject calculus → true multitarget Markov density f_{k+1|k}(X | X') → multitarget prediction integral → multitarget Bayes filter predictor.)

E. Multisensor-Multitarget Measurement Modeling

This is summarized in Figure 4. We begin with an RFS multisensor-multitarget measurement model. Consider the "standard" such model. Suppose that the state-set for the targets at time t_{k+1} is X = {x_1, …, x_n} with |X| = n. It is assumed that each target generates at most a single measurement, and that any measurement is generated by at most a single target. Let p_D(x) be the probability that the target x generates a measurement. Then the set of measurements Υ_{k+1}(x) generated by the target with state x will have at most a single element:

Υ_{k+1}(x) = ∅ if x is undetected (prob. 1 − p_D(x)),  or  {Z} if x is detected (prob. p_D(x))        (46)

Suppose that C_{k+1} is the set of measurements that are generated by no target, i.e., the clutter measurements. Then the RFS measurement model has the form

Σ_{k+1} = Υ_{k+1}(x_1) ∪ ⋯ ∪ Υ_{k+1}(x_n) ∪ C_{k+1}        (47)

The symbol ∪ indicates that measurements consist of target-generated measurements or clutter measurements. It is assumed that Υ_{k+1}(x_1), …, Υ_{k+1}(x_n), C_{k+1} are statistically independent. Because of independence, the b.m.f. of the RFS model is

β_{k+1}(T | X) = β_{Υ(x_1)}(T) ⋯ β_{Υ(x_n)}(T) · β_C(T)        (48)

where the b.m.f. of Υ_{k+1}(x) is

β_{Υ(x)}(T) = 1 − p_D(x) + p_D(x) ∫_T f_{k+1}(z | x) dz        (49)

and the b.m.f. of C_{k+1} is usually chosen to be Poisson:

β_C(T) = exp( λ_c ∫_T c_{k+1}(z) dz − λ_c )        (50)

Here c_{k+1}(z) is the spatial distribution of clutter measurements, and λ_c is the expected number of clutter measurements at time t_{k+1} (the "clutter rate").
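The "standard" measurement model of Eqs. (46)-(47) is straightforward to simulate, and a useful sanity check is that the expected scan size is n·p_D + λ_c. The sketch below assumes a scalar measurement space, a constant p_D, Gaussian measurement noise, and uniform clutter; all numeric values are illustrative.

```python
# Simulation of the standard measurement model, Eqs. (46)-(47): each target is
# detected with probability pD (yielding one noisy measurement), and a Poisson
# number of clutter points is superimposed. All values are illustrative.
import math, random

random.seed(1)
pD = 0.95                    # detection probability p_D(x)
sigma_w = 0.5                # measurement noise standard deviation
lam_c = 10.0                 # clutter rate lambda_c
clutter_lo, clutter_hi = -20.0, 20.0   # uniform clutter spatial distribution

def poisson_sample(lam):     # Knuth's Poisson sampler, stdlib only
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def collect(X):
    """One scan: Z = Y(x_1) u ... u Y(x_n) u C, per Eq. (47)."""
    Z = [random.gauss(x, sigma_w) for x in X if random.random() < pD]
    Z += [random.uniform(clutter_lo, clutter_hi)
          for _ in range(poisson_sample(lam_c))]
    return Z

X = [-5.0, 0.0, 5.0]         # true target states
sizes = [len(collect(X)) for _ in range(20_000)]
mean_size = sum(sizes) / len(sizes)
expected = len(X) * pD + lam_c   # E[|Z|] = n*pD + lambda_c
```

Averaged over many scans, the scan size matches n·p_D + λ_c, as the independence assumptions behind Eq. (48) require.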
The multisensor-multitarget likelihood function f_{k+1}(Z | X) is related to the belief-mass function by the equation

β_{k+1}(T | X) = ∫_T f_{k+1}(Z | X) δZ        (51)

The validity of this equation ensures that the formula for f_{k+1}(Z | X) is "true," because it shows that f_{k+1}(Z | X) and β_{k+1}(T | X) are equivalent. The explicit formula for f_{k+1}(Z | X) is too complicated to reproduce here (see [20]).

Figure 4: Multisensor-Multitarget Measurement Modeling. (RFS measurement model Σ_{k+1} (target-generated measurements ∪ clutter) → belief-mass function β_{k+1}(T | x) = Pr(Σ_{k+1} ⊆ T) → multiobject calculus → true multitarget likelihood function f_{k+1}(Z | X) → multitarget Bayes rule corrector.)

F. Multitarget Calculus for Modeling

Eqs. (45,51) do not tell us how to construct formulas for f_{k+1|k}(X | X') and f_{k+1}(Z | X) from formulas for β_{k+1|k}(S | X') and β_{k+1}(T | X). This is the purpose of multitarget calculus, which generalizes the reasoning used in Section III-G. Let E_x be an arbitrarily small region surrounding x and, for any closed subset S, let S'_x = S − E_x, where "−" indicates set-theoretic difference. Then S'_x and E_x are disjoint, and so

β_{k+1|k}(S'_x ∪ E_x | x') − β_{k+1|k}(S'_x | x') = β_{k+1|k}(E_x | x')        (52)

and so, from Eq. (6),

f_{k+1|k}(x | x') = lim_{|E_x| ↘ 0} ( β_{k+1|k}(S'_x ∪ E_x | x') − β_{k+1|k}(S'_x | x') ) / |E_x|        (53)

1) Set Derivatives: For any real-valued set function Φ(S), define the generalized Radon-Nikodým derivative ([9]):

(δΦ/δx)(S) = lim_{|E_x| ↘ 0} ( Φ(S'_x ∪ E_x) − Φ(S'_x) ) / |E_x|        (54)

Extend this definition as follows. For any X = {x_1, …, x_n} with |X| = n ≥ 1, define

(δΦ/δX)(S) = ( δ^n Φ / (δx_1 ⋯ δx_n) )(S)        (55)

and, if X = ∅, define

(δΦ/δ∅)(S) = Φ(S)        (56)

Then Eqs. (54-56) define the set derivative of Φ(S) with respect to X ([9]). The set derivative is the inverse operation of the set integral:

Φ(S) = ∫_S (δΦ/δX)(∅) δX        (57)

( δ/δX ∫_S f(Y) δY )(∅) = f(X)        (58)

2) Construction of True Markov Densities and Likelihood Functions: It is possible to derive "turn-the-crank" rules of differentiation for set derivatives: sum rules, power rules, product rules, chain rules, etc. ([20]). These rules permit the explicit construction of formulas for multitarget Markov densities and multitarget likelihood functions. This is because of the following two formulas, which are direct consequences of Eqs. (57,58):

f_{k+1|k}(X | X') = ( δβ_{k+1|k}(· | X') / δX )(∅)        (59)

f_{k+1}(Z | X) = ( δβ_{k+1}(· | X) / δZ )(∅)        (60)

V. PRINCIPLED APPROXIMATE MULTITARGET FILTERS

The multisensor-multitarget recursive Bayes filter of Section IV-C is usually computationally intractable. How can we approximate it in a manner that preserves, as faithfully as possible, the underlying models and their interrelationships? This question is answered by assuming that the m.p.d.f.'s f_{k|k}(X | Z^{(k)}) and/or f_{k+1|k}(X | Z^{(k)}) have a particular simplified form, one that permits approximate closed-form solution of the multitarget Bayes filter. Three types of approximation have been extensively investigated in the literature thus far: Poisson, independent identically distributed cluster (i.i.d.c.), and multi-Bernoulli. They result in, respectively, PHD filters, CPHD filters, and multi-Bernoulli filters. The purpose of this section is to summarize these filters, as well as their extensions to unknown clutter and detection profiles.

A. Probability Hypothesis Density (PHD) Filters

Recall from Section III-B that α-β filters are based on first-moment approximation of the single-sensor, single-target Bayes filter, as in Eqs. (7,8). By analogy, we assume that SNR is large enough that the multitarget Bayes filter can be approximated by its first-order statistical moments.
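The inverse relationship between set derivatives and set integrals, Eqs. (57)-(60), can be checked numerically in the simplest case. The sketch below assumes a one-dimensional Poisson RFS with a Gaussian intensity (an illustrative choice): its belief mass is β(S) = exp(∫_S D dx − N), and a finite-difference set derivative of β at ∅ should recover the single-target Poisson density f({x}) = e^{−N} D(x).

```python
# Finite-difference check of the set derivative, Eqs. (54) and (59): for a
# Poisson RFS with intensity D(x), beta(E_x) = exp(integral_{E_x} D - N), and
# [d beta / d x](empty) should converge to f({x}) = exp(-N) * D(x).
# The Gaussian intensity and all numbers are illustrative assumptions.
import math

N = 2.0                       # expected target number
def D(x):                     # intensity: N times a unit Gaussian
    return N * math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def beta(mass_in_S):          # Poisson belief mass, given integral_S D dx
    return math.exp(mass_in_S - N)

x = 0.7
eps = 1e-6                    # |E_x|, the volume of a small region around x
mass = D(x) * eps             # integral of D over E_x, for small E_x
fd_derivative = (beta(mass) - beta(0.0)) / eps   # (beta(E_x) - beta(empty)) / |E_x|
exact = math.exp(-N) * D(x)                      # f({x}) from Eq. (69)
```

The finite difference matches the closed form to high relative accuracy, which is exactly the "differentiate the belief-mass function at ∅" construction of Eqs. (59,60) in miniature.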
But first one must ask: What is the first-order moment of a multitarget probability distribution?

1) Probability Hypothesis Density Functions: The naïve definition of the first-order moment (expected value) of a multitarget distribution would be

X̄_{k|k} = ∫ X · f_{k|k}(X | Z^{(k)}) δX        (61)

However, it is mathematically undefined, for the simple reason that addition and subtraction ± of finite sets is undefined. Thus instead we must employ an alternative strategy: replace X by some "doppelgänger" δ_X for which addition and subtraction is definable. In point process theory, one (intuitively speaking) chooses δ_X to be

δ_X(x) = Σ_{y ∈ X} δ_y(x)        (62)

where δ_y(x) is the Dirac delta function concentrated at y. In this case Eq. (61) is replaced by ([20])

D_{k|k}(x | Z^{(k)}) = ∫ δ_X(x) · f_{k|k}(X | Z^{(k)}) δX        (63)
                    = ∫ f_{k|k}({x} ∪ W | Z^{(k)}) δW        (64)

This function is called a probability hypothesis density (PHD) or first-moment density function.³ It is completely characterized by the following property:

∫_S D_{k|k}(x | Z^{(k)}) dx = expected no. of targets in S        (65)

Thus the number D_{k|k}(x | Z^{(k)}) can be understood as the track density at x. A target with state x is more likely to be present in the scene when D_{k|k}(x | Z^{(k)}) is large than when it is small. Consequently, it is possible to use D_{k|k}(x | Z^{(k)}) to estimate the number and states of the targets. Let

N_{k|k} = ∫ D_{k|k}(x | Z^{(k)}) dx        (66)

be the total expected number of targets in the scene. Round off N_{k|k} to the nearest integer ν, and then determine those values x_1, …, x_ν of x that correspond to the ν highest peaks of D_{k|k}(x | Z^{(k)}). Then X̂ = {x_1, …, x_ν} is an estimate of the number of targets and their states.

³ PHDs are also known as "intensity functions." I avoid this terminology because of the potential for confusion with the many alternative meanings of "intensity" in tracking and information fusion applications. "PHD" is a historical usage [18].

2) Example of a PHD: Suppose that, in a one-dimensional scenario, the multitarget distribution (m.p.d.f.) corresponds to two targets located at x = 1 and x = 2:

f_{k|k}({x_1, x_2}) = N(x_1 − 1) · N(x_2 − 2) + N(x_2 − 1) · N(x_1 − 2)        (67)

Then the corresponding PHD can be shown to be:

D_{k|k}(x) = N(x − 1) + N(x − 2)        (68)

3) PHD Filters in the General Sense: The m.p.d.f. of a Poisson process has the form

f(X) = e^{−N} ∏_{x ∈ X} D(x)        (69)

where D(x) is a density function with integral N = ∫ D(x) dx. In analogy with constant-gain Kalman filters, Eqs. (7,8), assume that the multitarget densities in the multitarget Bayes filter are all approximately Poisson. Then this filter can be approximately replaced by a first-order moment filter of the form

D_{k|k}(x | Z^{(k)}) → (predictor) → D_{k+1|k}(x | Z^{(k)}) → (corrector) → D_{k+1|k+1}(x | Z^{(k+1)})

Such a filter is called a PHD filter in the general sense.

4) The Classical PHD Filter: The classical PHD filter is a PHD filter with these specific modeling assumptions [18], [20]: (1) a single sensor; (2) all target motions are independent; (3) measurements are conditionally independent of the target states; (4) the clutter process is Poisson in the sense of Eq. (50) and is independent of other measurements; (5) target-generated measurements are Bernoulli in the sense of Eq. (46); (6) the surviving-target process is Bernoulli in the sense of Eq. (38), and independent of persisting targets; and (7) the appearing-target process is Poisson in the sense of Eq. (44).

Using the methodology in Section VI, it can be shown that the exact⁴ time-update equation for the classical PHD filter is ([18], [20], Chapter 16):⁵

D_{k+1|k}(x | Z^{(k)}) = λ_B b_{k+1}(x) + ∫ p_S(x') f_{k+1|k}(x | x') D_{k|k}(x' | Z^{(k)}) dx'        (70)

and, if Z_{k+1} is the currently-collected measurement-set, that the approximate measurement-update equation is

D_{k+1|k+1}(x | Z^{(k+1)}) ≅ L_{Z_{k+1}}(x) · D_{k+1|k}(x | Z^{(k)})        (71)

Here the "PHD pseudolikelihood" is

L_{Z_{k+1}}(x) = 1 − p_D(x) + Σ_{z ∈ Z_{k+1}} ( p_D(x) L_z(x) ) / ( λ_c c_{k+1}(z) + τ_{k+1}(z) )        (72)

where L_z(x) = f_{k+1}(z | x), c_{k+1}(z) is the clutter spatial distribution, and λ_c is the clutter rate. Also,

τ_{k+1}(z) = ∫ p_D(x) L_z(x) D_{k+1|k}(x | Z^{(k)}) dx        (73)

The classical PHD filter has attractive computational characteristics: its order of complexity is O(mn), where m is the current number of measurements and n is the current number of tracks. The major limitation of the PHD filter is that N_{k|k} is not a stable instantaneous estimate of target number: its variance is typically large. Thus in practice, N_{k|k} must be averaged over a time-window in order to get stable target-number estimates.

B. Cardinalized PHD (CPHD) Filters

The CPHD filter is a generalization of the PHD filter.
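Before turning to the CPHD filter's properties, the classical PHD corrector, Eqs. (71)-(73), can be sketched on a discretized one-dimensional state space. The grid bounds, constant p_D, uniform clutter density, and Gaussian sensor likelihood below are illustrative assumptions, not prescribed by the paper.

```python
# Sketch of the classical PHD corrector, Eqs. (71)-(73), on a 1-D grid.
# Constant pD, uniform clutter, Gaussian sensor likelihood: all illustrative.
import numpy as np

x = np.linspace(-10.0, 10.0, 1001)
dx = x[1] - x[0]
pD, lam_c = 0.9, 5.0
c_z = 1.0 / 20.0                 # uniform clutter spatial density on [-10, 10]
sigma_w = 0.5

def L(z):
    # sensor likelihood L_z(x) = f(z | x): Gaussian about the target state
    return np.exp(-0.5 * ((z - x) / sigma_w) ** 2) / (sigma_w * np.sqrt(2.0 * np.pi))

# predicted PHD D_{k+1|k}: two targets' worth of intensity, at x = -3 and x = +4
D_pred = (np.exp(-0.5 * (x + 3.0) ** 2) +
          np.exp(-0.5 * (x - 4.0) ** 2)) / np.sqrt(2.0 * np.pi)

Z = [-3.1, 4.2, 7.5]             # one scan: two target-like returns, one stray

pseudo = np.full_like(x, 1.0 - pD)           # missed-detection term of Eq. (72)
for z in Z:
    Lz = L(z)
    tau = np.sum(pD * Lz * D_pred) * dx      # Eq. (73)
    pseudo += pD * Lz / (lam_c * c_z + tau)  # summand of Eq. (72)
D_post = pseudo * D_pred                     # Eq. (71)

N_post = np.sum(D_post) * dx                 # expected target number, Eq. (66)
```

The corrected PHD sharpens around the measurements that are consistent with predicted intensity, while the stray return at 7.5 contributes almost nothing; note the O(mn) cost, one pass over the grid per measurement.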
The CPHD filter has low-variance instantaneous estimates of target number and better tracking performance, but at the cost of greater computational complexity. Its computational order is O(m³n), although this can be reduced to O(mn) using numerical "balancing" techniques [10].

⁴ There seems to be a misconception in some quarters that Eq. (70) is approximate, i.e., that it requires the assumption that the prior multitarget distribution f_{k|k}(X | Z^{(k)}) be Poisson. This is not true.
⁵ For clarity, the target-spawning model is neglected.

1) CPHD Filters in the General Sense: The cardinality distribution p_{k|k}(n | Z^{(k)}) of a multitarget track density f_{k|k}(X | Z^{(k)}) was defined in Eq. (33). For each n ≥ 0, p_{k|k}(n | Z^{(k)}) is the probability that there are n targets in the scene. The multitarget density of an independently identically distributed cluster (i.i.d.c.) process has the form

f(X) = n! · p(n) · ∏_{x ∈ X} s(x)        (74)

where p(n) is a cardinality distribution and s(x) is a probability density (a "spatial distribution"). Assume that the multitarget distributions in the multitarget Bayes filter are approximately i.i.d.c. Then this filter can be approximately replaced by a higher-order moment filter of the form

{D_{k|k}(x | Z^{(k)}), p_{k|k}(n | Z^{(k)})} → (predictor) → {D_{k+1|k}(x | Z^{(k)}), p_{k+1|k}(n | Z^{(k)})} → (corrector) → {D_{k+1|k+1}(x | Z^{(k+1)}), p_{k+1|k+1}(n | Z^{(k+1)})}

Any such filter is a CPHD filter in the general sense.

2) The Classical CPHD Filter: The classical CPHD filter is a CPHD filter with these specific modeling assumptions [19], [20]: (1) single sensor; (2) independent target motions; (3) conditionally independent measurements; (4) the clutter process is i.i.d.c. and independent of other measurements; (5) target-generated measurements are Bernoulli in the sense of Eq. (46); (6) surviving targets are Bernoulli in the sense of Eq. (38), and independent of persisting targets; and (7) the appearing-target process is i.i.d.c. Given these assumptions and the methodology in Section VI, one can derive the time- and measurement-update equations for the classical CPHD filter.
These equations are beyond the scope of this paper (see [20], Chapter 16).

The classical PHD and CPHD filters are most commonly implemented using either Gaussian mixture techniques (assuming moderate motion and/or measurement nonlinearities) or particle methods (for stronger nonlinearities). See [20], Chapter 16, for more details. The classical PHD and CPHD filters have also been employed in hundreds of research papers addressing dozens of applications, far too many to address here. A few diverse examples: ground-target tracking using GMTI (ground moving target indicator) radar [41]; passive-RF air-target tracking [40]; satellite-borne optical satellite tracking [6]; audio speaker tracking [13]; underwater monostatic active sonar [1], [2]; and underwater multistatic active sonar [8].

C. Multisensor Classical PHD/CPHD Filters

The classical PHD and CPHD filters can be extended to the multisensor case. However, these extensions are nontrivial and require special theoretical analysis. The exact approach [3], [17] is combinatorial, and therefore is appropriate only for a small number of sensors. Moratuwage, Vo, and Danwei Wang have successfully employed the exact multisensor PHD filter in a robotics SLAM (simultaneous localization and mapping) application [30].

The most common approach, the iterated-corrector method, is heuristic: the PHD filter or CPHD filter corrector equation is applied successively for each sensor. This approach depends on sensor order, but performs well when the sensors' probabilities of detection are not too dissimilar. Otherwise, larger-p_D sensors should be applied before smaller-p_D sensors [34]. The parallel combination approximate multisensor (PCAM) approach [14] does not depend on sensor order, is computationally tractable, and results in good tracking performance. Another approach has been proposed [38], in which the PHD pseudolikelihoods of Eq. (72) are averaged. Given the discussion in Section III-F, this is an obviously problematic approach that should result in poor target localization, which indeed it does.

Nagappa and Clark have conducted simulations comparing the approaches just summarized. In a first set of three-sensor simulations, two sensors had p_D = 0.95 and the third p_D = 0.9. In decreasing order of performance: PCAM-CPHD, PCAM-PHD, iterated-corrector CPHD, iterated-corrector PHD, averaged-pseudolikelihood PHD. The performance of the averaged-pseudolikelihood PHD filter was particularly bad, with the iterated-corrector PHD filter having intermediate performance. Similar results were observed when the probability of detection of the third sensor was decreased to p_D = 0.85, and again to p_D = 0.7.

D. Multi-Bernoulli Filters

Suppose that we have ν target tracks, and that each track has a track distribution s_i(x) and a probability of existence q_i, for i = 1, …, ν. Then the multitarget density of a multi-Bernoulli process has the form, for X = {x_1, …, x_n} with |X| = n,

f(X) = f(∅) · Σ_{1 ≤ i_1 ≠ ⋯ ≠ i_n ≤ ν} ∏_{j=1}^{n} ( q_{i_j} s_{i_j}(x_j) ) / ( 1 − q_{i_j} )        (75)

where

f(∅) = ∏_{i=1}^{ν} (1 − q_i)        (76)

Assume that the multitarget distributions in the multitarget Bayes filter are approximately multi-Bernoulli.
Then it can be approximately replaced by a multi-Bernoulli filter in the general sense:

{(q_i^{k|k}, s_i^{k|k}(x))} → (predictor) → {(q_i^{k+1|k}, s_i^{k+1|k}(x))} → (corrector) → {(q_i^{k+1|k+1}, s_i^{k+1|k+1}(x))}

The first such filter was proposed in [20], Chapter 17 but, because of an ill-conceived linearization step, exhibited a pronounced bias in the target-number estimate. Vo, Vo, and Cantoni corrected this bias with their cardinality-balanced multitarget multi-Bernoulli (CBMeMBer) filter [44]. The CBMeMBer filter appears to be well-suited for applications in which motion and/or measurement nonlinearities are strong and therefore sequential Monte Carlo (a.k.a. particle) implementation techniques must be used [44]. Dunne and Kirubarajan have devised a jump-Markov version of this filter [5], and Shanhung Wong, Vo, Vo, and Hoseinnezhad have applied it to road-constrained ground-target tracking [36].

Vo, Vo, Pham, and Suter subsequently devised a multi-Bernoulli track-before-detect (TBD) filter for tracking in pixelized image data, assuming that targets have physical extent and thus cannot overlap [47]. As already noted, this filter outperforms the previously best-known TBD filter, the histogram-PMHT filter. It has been successfully applied to challenging real videos, e.g., hockey and soccer matches [1], [11].

E. Background-Agnostic PHD and CPHD Filters

The PHD, CPHD, and multi-Bernoulli filters all require a priori models of both the clutter (in the form of a clutter spatial distribution c_{k+1}(z) and clutter rate λ_c) and the detection profile (in the form of a state-dependent probability of detection p_D(x)). A series of second-generation PHD and CPHD filters do not require a priori models.

Any PHD, CPHD, or multi-Bernoulli filter can be transformed into a filter that can operate when the probability of detection is unknown and/or dynamic [5]. The basic idea is simple: replace the target state x by an augmented target state x̊ = (a, x), where 0 ≤ a ≤ 1 is the unknown probability of detection associated with a target with state x.
The resulting probability-of-detection-agnostic (PDAG) PHD, CPHD, and multi-Bernoulli filters have the same form as before, except that the PHDs and spatial distributions have the form D(a, x) and s(a, x). The PDAG-CPHD filter has been successfully implemented in [8].

Any PHD, CPHD, or multi-Bernoulli filter can be transformed into one that can operate when the clutter background is unknown and/or dynamic [6]. One simply replaces the target state space by a state space that includes both targets and clutter generators. The clutter generators are assumed to be target-like, in that their measurement-generation process is Bernoulli in the sense of Eq. (46). In this case, any state x̊ of the joint target-clutter system can have two forms: x̊ = x or x̊ = c, where c is the state of a clutter generator. One defines suitable extensions f_{k+1}(z | x̊) and f_{k+1|k}(x̊ | x̊') of the likelihood function and Markov density. Targets must not transition to clutter generators, and vice-versa:

f_{k+1|k}(x | c') = 0,    f_{k+1|k}(c | x') = 0        (77)

Otherwise, target statistics and clutter statistics would be inherently intermixed, thus making it more difficult to distinguish targets from clutter. The filtering equations for clutter-agnostic (CAG) versions of the PHD, CPHD, and multi-Bernoulli filters can then be derived using simple algebra. A version of the CAG-CPHD filter was successfully implemented in [8].

The PDAG and CAG approaches can be combined, thus allowing any PHD, CPHD, or multi-Bernoulli filter to operate under general background-agnostic (GBAG) conditions. Various versions of the GBAG-CPHD filter have been successfully implemented in [6], and likewise for a GBAG-CMeMBer filter in [45].

A related development is the multitarget intensity filter (MIF) or iFilter [39]. It is actually a CAG-PHD filter, except that Eqs. (77) are violated: clutter generators are (problematically) allowed to become targets, and vice-versa.
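For a small number of tracks, the multi-Bernoulli density of Eqs. (75)-(76) can be evaluated directly by enumerating the distinct track indices i_1 ≠ ⋯ ≠ i_n. The sketch below uses two hypothetical tracks with Gaussian track distributions; the existence probabilities, means, and widths are illustrative assumptions.

```python
# Direct evaluation of the multi-Bernoulli m.p.d.f., Eqs. (75)-(76), by
# brute-force enumeration over injective assignments of states to tracks.
# Two Gaussian tracks with illustrative parameters.
import math
from itertools import permutations

q = [0.8, 0.6]                    # existence probabilities q_i, nu = 2 tracks
means, sigmas = [0.0, 5.0], [1.0, 1.0]

def s(i, x):                      # track distribution s_i(x)
    return (math.exp(-0.5 * ((x - means[i]) / sigmas[i]) ** 2)
            / (sigmas[i] * math.sqrt(2.0 * math.pi)))

f_empty = (1.0 - q[0]) * (1.0 - q[1])        # Eq. (76)

def f(X):
    """Multi-Bernoulli density f(X), Eq. (75), for |X| <= nu."""
    n, nu = len(X), len(q)
    if n > nu:
        return 0.0
    total = 0.0
    for idx in permutations(range(nu), n):   # distinct indices i_1 != ... != i_n
        term = 1.0
        for j, i in enumerate(idx):
            term *= q[i] * s(i, X[j]) / (1.0 - q[i])
        total += term
    return f_empty * total

p0 = f([])                        # density of "no targets": (1-q_1)(1-q_2)
p1 = f([0.1])                     # one target, near track 1's mean
```

With these numbers, f(∅) = 0.2 · 0.4 = 0.08, and a single state near track 1's mean is dominated by that track's term. The factorial growth of the enumeration is why practical multi-Bernoulli filters prune and gate the track set.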


More information

The Sequential Monte Carlo Multi-Bernoulli Filter for Extended Targets

The Sequential Monte Carlo Multi-Bernoulli Filter for Extended Targets 18th International Conference on Information Fusion Washington, DC - July 6-9, 215 The Sequential onte Carlo ulti-bernoulli Filter for Extended Targets eiqin Liu,Tongyang Jiang, and Senlin Zhang State

More information

SINGLE sensor multi-target tracking has received a great amount of attention in the scientific literature. Whenever the

SINGLE sensor multi-target tracking has received a great amount of attention in the scientific literature. Whenever the 1 A multi-sensor multi-bernoulli filter Augustin-Alexandru Saucan, Mark Coates and Michael Rabbat Abstract In this paper we derive a multi-sensor multi-bernoulli (MS-MeMBer) filter for multi-target tracking.

More information

L09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms

L09. PARTICLE FILTERING. NA568 Mobile Robotics: Methods & Algorithms L09. PARTICLE FILTERING NA568 Mobile Robotics: Methods & Algorithms Particle Filters Different approach to state estimation Instead of parametric description of state (and uncertainty), use a set of state

More information

RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS

RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS Frédéric Mustière e-mail: mustiere@site.uottawa.ca Miodrag Bolić e-mail: mbolic@site.uottawa.ca Martin Bouchard e-mail: bouchard@site.uottawa.ca

More information

Particle Filters. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics

Particle Filters. Pieter Abbeel UC Berkeley EECS. Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Particle Filters Pieter Abbeel UC Berkeley EECS Many slides adapted from Thrun, Burgard and Fox, Probabilistic Robotics Motivation For continuous spaces: often no analytical formulas for Bayes filter updates

More information

Approximate Inference Part 1 of 2

Approximate Inference Part 1 of 2 Approximate Inference Part 1 of 2 Tom Minka Microsoft Research, Cambridge, UK Machine Learning Summer School 2009 http://mlg.eng.cam.ac.uk/mlss09/ Bayesian paradigm Consistent use of probability theory

More information

Improved SMC implementation of the PHD filter

Improved SMC implementation of the PHD filter Improved SMC implementation of the PHD filter Branko Ristic ISR Division DSTO Melbourne Australia branko.ristic@dsto.defence.gov.au Daniel Clark EECE EPS Heriot-Watt University Edinburgh United Kingdom

More information

Parametric Techniques Lecture 3

Parametric Techniques Lecture 3 Parametric Techniques Lecture 3 Jason Corso SUNY at Buffalo 22 January 2009 J. Corso (SUNY at Buffalo) Parametric Techniques Lecture 3 22 January 2009 1 / 39 Introduction In Lecture 2, we learned how to

More information

Analytic Implementations of the Cardinalized Probability Hypothesis Density Filter

Analytic Implementations of the Cardinalized Probability Hypothesis Density Filter PREPRINT: IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 55, NO. 7 PART 2, PP. 3553 3567, 27 Analytic Implementations of the Cardinalized Probability Hypothesis Density Filter Ba-Tuong Vo, Ba-Ngu Vo, and

More information

1 Using standard errors when comparing estimated values

1 Using standard errors when comparing estimated values MLPR Assignment Part : General comments Below are comments on some recurring issues I came across when marking the second part of the assignment, which I thought it would help to explain in more detail

More information

Linear Dynamical Systems

Linear Dynamical Systems Linear Dynamical Systems Sargur N. srihari@cedar.buffalo.edu Machine Learning Course: http://www.cedar.buffalo.edu/~srihari/cse574/index.html Two Models Described by Same Graph Latent variables Observations

More information

Sensor Fusion: Particle Filter

Sensor Fusion: Particle Filter Sensor Fusion: Particle Filter By: Gordana Stojceska stojcesk@in.tum.de Outline Motivation Applications Fundamentals Tracking People Advantages and disadvantages Summary June 05 JASS '05, St.Petersburg,

More information

State-Space Methods for Inferring Spike Trains from Calcium Imaging

State-Space Methods for Inferring Spike Trains from Calcium Imaging State-Space Methods for Inferring Spike Trains from Calcium Imaging Joshua Vogelstein Johns Hopkins April 23, 2009 Joshua Vogelstein (Johns Hopkins) State-Space Calcium Imaging April 23, 2009 1 / 78 Outline

More information

Probabilistic Fundamentals in Robotics. DAUIN Politecnico di Torino July 2010

Probabilistic Fundamentals in Robotics. DAUIN Politecnico di Torino July 2010 Probabilistic Fundamentals in Robotics Gaussian Filters Basilio Bona DAUIN Politecnico di Torino July 2010 Course Outline Basic mathematical framework Probabilistic models of mobile robots Mobile robot

More information

Parametric Techniques

Parametric Techniques Parametric Techniques Jason J. Corso SUNY at Buffalo J. Corso (SUNY at Buffalo) Parametric Techniques 1 / 39 Introduction When covering Bayesian Decision Theory, we assumed the full probabilistic structure

More information

BAYESIAN DECISION THEORY

BAYESIAN DECISION THEORY Last updated: September 17, 2012 BAYESIAN DECISION THEORY Problems 2 The following problems from the textbook are relevant: 2.1 2.9, 2.11, 2.17 For this week, please at least solve Problem 2.3. We will

More information

Randomized Algorithms

Randomized Algorithms Randomized Algorithms Prof. Tapio Elomaa tapio.elomaa@tut.fi Course Basics A new 4 credit unit course Part of Theoretical Computer Science courses at the Department of Mathematics There will be 4 hours

More information

Suppression of impulse noise in Track-Before-Detect Algorithms

Suppression of impulse noise in Track-Before-Detect Algorithms Computer Applications in Electrical Engineering Suppression of impulse noise in Track-Before-Detect Algorithms Przemysław Mazurek West-Pomeranian University of Technology 71-126 Szczecin, ul. 26. Kwietnia

More information

Human Pose Tracking I: Basics. David Fleet University of Toronto

Human Pose Tracking I: Basics. David Fleet University of Toronto Human Pose Tracking I: Basics David Fleet University of Toronto CIFAR Summer School, 2009 Looking at People Challenges: Complex pose / motion People have many degrees of freedom, comprising an articulated

More information

Sequential Bayesian Estimation of the Probability of Detection for Tracking

Sequential Bayesian Estimation of the Probability of Detection for Tracking 2th International Conference on Information Fusion Seattle, WA, USA, July 6-9, 2009 Sequential Bayesian Estimation of the Probability of Detection for Tracking Kevin G. Jamieson Applied Physics Lab University

More information

Target Tracking and Classification using Collaborative Sensor Networks

Target Tracking and Classification using Collaborative Sensor Networks Target Tracking and Classification using Collaborative Sensor Networks Xiaodong Wang Department of Electrical Engineering Columbia University p.1/3 Talk Outline Background on distributed wireless sensor

More information

Markov chain Monte Carlo methods for visual tracking

Markov chain Monte Carlo methods for visual tracking Markov chain Monte Carlo methods for visual tracking Ray Luo rluo@cory.eecs.berkeley.edu Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720

More information

A Statistical Input Pruning Method for Artificial Neural Networks Used in Environmental Modelling

A Statistical Input Pruning Method for Artificial Neural Networks Used in Environmental Modelling A Statistical Input Pruning Method for Artificial Neural Networks Used in Environmental Modelling G. B. Kingston, H. R. Maier and M. F. Lambert Centre for Applied Modelling in Water Engineering, School

More information

Mobile Robot Localization

Mobile Robot Localization Mobile Robot Localization 1 The Problem of Robot Localization Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)? Two sources of uncertainty: - observations

More information

STAT 302 Introduction to Probability Learning Outcomes. Textbook: A First Course in Probability by Sheldon Ross, 8 th ed.

STAT 302 Introduction to Probability Learning Outcomes. Textbook: A First Course in Probability by Sheldon Ross, 8 th ed. STAT 302 Introduction to Probability Learning Outcomes Textbook: A First Course in Probability by Sheldon Ross, 8 th ed. Chapter 1: Combinatorial Analysis Demonstrate the ability to solve combinatorial

More information

Dynamic System Identification using HDMR-Bayesian Technique

Dynamic System Identification using HDMR-Bayesian Technique Dynamic System Identification using HDMR-Bayesian Technique *Shereena O A 1) and Dr. B N Rao 2) 1), 2) Department of Civil Engineering, IIT Madras, Chennai 600036, Tamil Nadu, India 1) ce14d020@smail.iitm.ac.in

More information

A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models

A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models Jeff A. Bilmes (bilmes@cs.berkeley.edu) International Computer Science Institute

More information

Probability and Information Theory. Sargur N. Srihari

Probability and Information Theory. Sargur N. Srihari Probability and Information Theory Sargur N. srihari@cedar.buffalo.edu 1 Topics in Probability and Information Theory Overview 1. Why Probability? 2. Random Variables 3. Probability Distributions 4. Marginal

More information

Fundamentals of Data Assimila1on

Fundamentals of Data Assimila1on 014 GSI Community Tutorial NCAR Foothills Campus, Boulder, CO July 14-16, 014 Fundamentals of Data Assimila1on Milija Zupanski Cooperative Institute for Research in the Atmosphere Colorado State University

More information

Approximate Inference Part 1 of 2

Approximate Inference Part 1 of 2 Approximate Inference Part 1 of 2 Tom Minka Microsoft Research, Cambridge, UK Machine Learning Summer School 2009 http://mlg.eng.cam.ac.uk/mlss09/ 1 Bayesian paradigm Consistent use of probability theory

More information

Lecture Outline. Target Tracking: Lecture 3 Maneuvering Target Tracking Issues. Maneuver Illustration. Maneuver Illustration. Maneuver Detection

Lecture Outline. Target Tracking: Lecture 3 Maneuvering Target Tracking Issues. Maneuver Illustration. Maneuver Illustration. Maneuver Detection REGLERTEKNIK Lecture Outline AUTOMATIC CONTROL Target Tracking: Lecture 3 Maneuvering Target Tracking Issues Maneuver Detection Emre Özkan emre@isy.liu.se Division of Automatic Control Department of Electrical

More information

Bayesian Inference for DSGE Models. Lawrence J. Christiano

Bayesian Inference for DSGE Models. Lawrence J. Christiano Bayesian Inference for DSGE Models Lawrence J. Christiano Outline State space-observer form. convenient for model estimation and many other things. Bayesian inference Bayes rule. Monte Carlo integation.

More information

MMSE-Based Filtering for Linear and Nonlinear Systems in the Presence of Non-Gaussian System and Measurement Noise

MMSE-Based Filtering for Linear and Nonlinear Systems in the Presence of Non-Gaussian System and Measurement Noise MMSE-Based Filtering for Linear and Nonlinear Systems in the Presence of Non-Gaussian System and Measurement Noise I. Bilik 1 and J. Tabrikian 2 1 Dept. of Electrical and Computer Engineering, University

More information

Autonomous Navigation for Flying Robots

Autonomous Navigation for Flying Robots Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 6.2: Kalman Filter Jürgen Sturm Technische Universität München Motivation Bayes filter is a useful tool for state

More information

L11. EKF SLAM: PART I. NA568 Mobile Robotics: Methods & Algorithms

L11. EKF SLAM: PART I. NA568 Mobile Robotics: Methods & Algorithms L11. EKF SLAM: PART I NA568 Mobile Robotics: Methods & Algorithms Today s Topic EKF Feature-Based SLAM State Representation Process / Observation Models Landmark Initialization Robot-Landmark Correlation

More information

A Tree Search Approach to Target Tracking in Clutter

A Tree Search Approach to Target Tracking in Clutter 12th International Conference on Information Fusion Seattle, WA, USA, July 6-9, 2009 A Tree Search Approach to Target Tracking in Clutter Jill K. Nelson and Hossein Roufarshbaf Department of Electrical

More information

Based on slides by Richard Zemel

Based on slides by Richard Zemel CSC 412/2506 Winter 2018 Probabilistic Learning and Reasoning Lecture 3: Directed Graphical Models and Latent Variables Based on slides by Richard Zemel Learning outcomes What aspects of a model can we

More information

Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p.

Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p. Preface p. xiii Acknowledgment p. xix Introduction p. 1 Fundamental Problems p. 2 Core of Fundamental Theory and General Mathematical Ideas p. 3 Classical Statistical Decision p. 4 Bayes Decision p. 5

More information

Particle Filters. Outline

Particle Filters. Outline Particle Filters M. Sami Fadali Professor of EE University of Nevada Outline Monte Carlo integration. Particle filter. Importance sampling. Degeneracy Resampling Example. 1 2 Monte Carlo Integration Numerical

More information

A Sufficient Comparison of Trackers

A Sufficient Comparison of Trackers A Sufficient Comparison of Trackers David Bizup University of Virginia Department of Systems and Information Engineering P.O. Box 400747 151 Engineer's Way Charlottesville, VA 22904 Donald E. Brown University

More information

Estimating the Shape of Targets with a PHD Filter

Estimating the Shape of Targets with a PHD Filter Estimating the Shape of Targets with a PHD Filter Christian Lundquist, Karl Granström, Umut Orguner Department of Electrical Engineering Linöping University 583 33 Linöping, Sweden Email: {lundquist, arl,

More information

Multi-target Multi-Bernoulli Tracking and Joint. Multi-target Estimator

Multi-target Multi-Bernoulli Tracking and Joint. Multi-target Estimator Multi-target Multi-Bernoulli Tracing and Joint Multi-target Estimator MULTI-TARGET MULTI-BERNOULLI TRACKING AND JOINT MULTI-TARGET ESTIMATOR BY ERKAN BASER, B.Sc., M.Sc. a thesis submitted to the department

More information

The Marginalized δ-glmb Filter

The Marginalized δ-glmb Filter The Marginalized δ-glmb Filter Claudio Fantacci, Ba-Tuong Vo, Francesco Papi and Ba-Ngu Vo Abstract arxiv:5.96v [stat.co] 6 Apr 7 The multi-target Bayes filter proposed by Mahler is a principled solution

More information

Remaining Useful Performance Analysis of Batteries

Remaining Useful Performance Analysis of Batteries Remaining Useful Performance Analysis of Batteries Wei He, Nicholas Williard, Michael Osterman, and Michael Pecht Center for Advanced Life Engineering, University of Maryland, College Park, MD 20742, USA

More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Brown University CSCI 1950-F, Spring 2012 Prof. Erik Sudderth Lecture 25: Markov Chain Monte Carlo (MCMC) Course Review and Advanced Topics Many figures courtesy Kevin

More information

Chapter 9. Non-Parametric Density Function Estimation

Chapter 9. Non-Parametric Density Function Estimation 9-1 Density Estimation Version 1.2 Chapter 9 Non-Parametric Density Function Estimation 9.1. Introduction We have discussed several estimation techniques: method of moments, maximum likelihood, and least

More information

The Multiple Model CPHD Tracker

The Multiple Model CPHD Tracker The Multiple Model CPHD Tracker 1 Ramona Georgescu, Student Member, IEEE, and Peter Willett, Fellow, IEEE Abstract The Probability Hypothesis Density (PHD) is a practical approximation to the full Bayesian

More information

Introduction to Mobile Robotics Probabilistic Robotics

Introduction to Mobile Robotics Probabilistic Robotics Introduction to Mobile Robotics Probabilistic Robotics Wolfram Burgard 1 Probabilistic Robotics Key idea: Explicit representation of uncertainty (using the calculus of probability theory) Perception Action

More information

STONY BROOK UNIVERSITY. CEAS Technical Report 829

STONY BROOK UNIVERSITY. CEAS Technical Report 829 1 STONY BROOK UNIVERSITY CEAS Technical Report 829 Variable and Multiple Target Tracking by Particle Filtering and Maximum Likelihood Monte Carlo Method Jaechan Lim January 4, 2006 2 Abstract In most applications

More information

RESEARCH ARTICLE. Online quantization in nonlinear filtering

RESEARCH ARTICLE. Online quantization in nonlinear filtering Journal of Statistical Computation & Simulation Vol. 00, No. 00, Month 200x, 3 RESEARCH ARTICLE Online quantization in nonlinear filtering A. Feuer and G. C. Goodwin Received 00 Month 200x; in final form

More information

Sequential Monte Carlo Methods for Bayesian Computation

Sequential Monte Carlo Methods for Bayesian Computation Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter

More information

Introduction. Chapter 1

Introduction. Chapter 1 Chapter 1 Introduction In this book we will be concerned with supervised learning, which is the problem of learning input-output mappings from empirical data (the training dataset). Depending on the characteristics

More information

Extended Target Tracking Using a Gaussian- Mixture PHD Filter

Extended Target Tracking Using a Gaussian- Mixture PHD Filter Extended Target Tracing Using a Gaussian- Mixture PHD Filter Karl Granström, Christian Lundquist and Umut Orguner Linöping University Post Print N.B.: When citing this wor, cite the original article. IEEE.

More information

Multi-target Tracking for Measurement Models with Additive Contributions

Multi-target Tracking for Measurement Models with Additive Contributions Multi-target Tracking for Measurement Models with Additive Contributions Frederic Thouin, antosh Nannuru, Mark Coates Electrical and Computer Engineering Department McGill University Montreal, Canada Email:

More information

Expectation Propagation in Factor Graphs: A Tutorial

Expectation Propagation in Factor Graphs: A Tutorial DRAFT: Version 0.1, 28 October 2005. Do not distribute. Expectation Propagation in Factor Graphs: A Tutorial Charles Sutton October 28, 2005 Abstract Expectation propagation is an important variational

More information

DETECTION theory deals primarily with techniques for

DETECTION theory deals primarily with techniques for ADVANCED SIGNAL PROCESSING SE Optimum Detection of Deterministic and Random Signals Stefan Tertinek Graz University of Technology turtle@sbox.tugraz.at Abstract This paper introduces various methods for

More information