Generalized optimal sub-pattern assignment metric

Abu Sajana Rahmathullah, Ángel F. García-Fernández, Lennart Svensson

arXiv:1601.05585v7 [cs.SY] 12 Sep 2018

A. S. Rahmathullah is with Zenuity AB, Gothenburg, Sweden (email: abusajana@gmail.com). Á. F. García-Fernández is with the Department of Electrical Engineering and Automation, Aalto University, 02150 Espoo, Finland (email: angel.garciafernandez@aalto.fi). L. Svensson is with the Department of Signals and Systems, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden (email: lennart.svensson@chalmers.se).

Abstract: This paper presents the generalized optimal sub-pattern assignment (GOSPA) metric on the space of finite sets of targets. Compared to the well-established optimal sub-pattern assignment (OSPA) metric, GOSPA is not normalised by the cardinality of the largest set and it penalizes cardinality errors differently, which enables us to express it as an optimisation over assignments instead of permutations. An important consequence of this is that GOSPA allows us to penalize localization errors for detected targets and the errors due to missed and false targets, as indicated by traditional multiple target tracking (MTT) performance measures, in a sound manner. In addition, we extend the GOSPA metric to the space of random finite sets, which is important to evaluate MTT algorithms via simulations in a rigorous way.

Index Terms: Multiple target tracking, metric, random finite sets, optimal sub-pattern assignment metric.

I. Introduction

Multiple target tracking (MTT) algorithms sequentially estimate a set of targets, which appear, move and disappear from a scene, given noisy sensor observations [1]. In order to assess and compare the performance of MTT algorithms, one needs to compute the similarity between the ground truth and the estimated set. Traditionally, MTT performance assessment has been based on intuitive concepts such as localization error for properly detected targets and costs for missed targets and false targets [2, Sec. 13.6], [3]–[6]. These concepts are appealing and practical for radar operators, but the way they have been quantified to measure error has been ad hoc. With the advent of the random finite set (RFS) framework for MTT [1], it has been possible to design and define the errors in a mathematically sound way, without ad hoc mechanisms. In this framework, at any given time, the ground truth is a set that contains the true target states and the estimate is a set that contains the estimated target states. The error is then the distance between these two sets according to a metric, which satisfies the properties of non-negativity, definiteness, symmetry and triangle inequality [7], [8, Sec. 6.2].

The Hausdorff metric [9] and the Wasserstein metric [9] (also referred to as the optimal mass transfer metric in [10]) were the first metrics on the space of finite sets of targets for MTT. However, the former has been shown to be insensitive to cardinality mismatches and the latter lacks a consistent physical interpretation when the sets have different cardinalities [10]. In [10], the optimal sub-pattern assignment (OSPA) metric was proposed to address these issues. OSPA optimally assigns all targets in the smallest set to targets in the other set and computes a localization cost based on this assignment. The rest of the targets are accounted for by a cardinality mismatch penalty. The OSPA metric has also been adapted to handle sets of labelled targets [11].

We argue that it is more desirable to have a metric that accounts for the costs mentioned in traditional MTT performance assessment methods (localization error for properly detected targets and false and missed targets) rather than localization error for the targets in the smallest set and cardinality mismatch, which is a mathematical concept that is more related to the RFS formulation of the MTT problem than to the original MTT problem itself. For example, OSPA does not encourage trackers to have as few false and missed targets as possible.

In this paper, we propose such a metric: the generalized OSPA (GOSPA) metric, which is able to penalize localization errors for properly detected targets, missed targets and false targets. In order to obtain this metric, we first generalize the unnormalized OSPA by including an additional parameter that enables us to select the cardinality mismatch cost from a range of values. Then, we show that for a specific selection of this parameter, the GOSPA metric is a sum of localization errors for the properly detected targets and a penalty for missed and false targets, as in traditional MTT performance assessment algorithms. Importantly, this implies that we now have a metric that satisfies the fundamental properties of metrics and the intuitive, classical notions of how MTT algorithms should be evaluated [2, Sec. 13.6]. After we derived the GOSPA metric, it has been used in a separate performance evaluation [21], which illustrates the usefulness of GOSPA for analysing how the numbers of missed and false targets contribute to the total loss.

We also extend the metric to random sets of targets. This extension has received less attention in the MTT literature despite its significance for performance evaluation. All the above-mentioned metrics assume that the ground truth and the estimates are known. However, in the RFS framework, the ground truth is not known but is modelled as a random finite set [1]. Also, algorithm evaluation is usually performed by averaging the error of the estimates for different measurements obtained by Monte Carlo simulation. This implies that the estimates are also RFSs, so it is important to have a metric that considers RFSs rather than finite sets. In the literature, there is no formal treatment of this problem to our knowledge. In this paper, we fill this gap by showing that the mean GOSPA and the root mean square GOSPA are metrics for RFSs of targets.

The outline of the rest of the paper is as follows. In Section II, we present GOSPA and its most appropriate form for MTT. In Section III, we extend it to RFSs of targets. In Section IV, we illustrate that the proposed choice of GOSPA provides expected results compared to OSPA and unnormalized OSPA. Finally, conclusions are drawn in Section V.

II. Generalized OSPA metric

In this section, we present the generalized OSPA (GOSPA) metric to measure the distance between finite sets of targets.

Definition 1. Let $c > 0$, $0 < \alpha \le 2$ and $1 \le p < \infty$. Let $d(x,y)$ denote a metric for any $x, y \in \mathbb{R}^N$ and let $d^{(c)}(x,y) = \min(d(x,y), c)$ be its cut-off metric [8, Sec. 6.2]. Let $\Pi_n$ be the set of all permutations of $\{1, \ldots, n\}$ for any $n \in \mathbb{N}$ and any element $\pi \in \Pi_n$ be a sequence $(\pi(1), \ldots, \pi(n))$. Let $X = \{x_1, \ldots, x_{|X|}\}$ and $Y = \{y_1, \ldots, y_{|Y|}\}$ be finite subsets of $\mathbb{R}^N$. For $|X| \le |Y|$, the GOSPA metric is defined as

d_p^{(c,\alpha)}(X,Y) \triangleq \left( \min_{\pi \in \Pi_{|Y|}} \sum_{i=1}^{|X|} d^{(c)}(x_i, y_{\pi(i)})^p + \frac{c^p}{\alpha}\,(|Y| - |X|) \right)^{1/p}.    (1)

If $|X| > |Y|$, $d_p^{(c,\alpha)}(X,Y) \triangleq d_p^{(c,\alpha)}(Y,X)$.

It can be seen from the definition that the non-negativity, symmetry and definiteness properties of a metric hold for GOSPA. The proof of the triangle inequality is provided in Appendix A.

We briefly discuss the roles of the parameters $p$, $c$ and $\alpha$. The role of the exponent $p$ in GOSPA is similar to that in OSPA [10]: the larger the value of $p$ is, the more the outliers are penalized. The parameter $c$ in GOSPA determines the maximum allowable localization error and, along with $\alpha$, it also determines the error due to cardinality mismatch. By setting the parameter $\alpha = 1$, we get the OSPA metric without the normalization that divides the metric by $\max(|X|, |Y|)$. In Section II-A, we first discuss why the normalization in OSPA should be removed. In Section II-B, we indicate the most suitable choice of $\alpha$ for evaluating MTT algorithms.

A. On the removal of normalization

In this section, we illustrate that the normalization in OSPA provides counterintuitive results, using the example below.

Example 1. Let us say the ground truth is $X = \emptyset$ and we have estimates $Y_j = \{y_1, \ldots, y_j\}$ indexed with $j \in \mathbb{N}$. Intuitively, for increasing values of $j$, there is a higher number of false targets, so the distance from $X$ to $Y_j$ should also increase. However, the OSPA metric is $c$ for any $j$. That is, according to the OSPA metric, all these estimates are equally accurate, which is not the desired evaluation in MTT. This undesirable property of the OSPA metric is due to the normalization. If we remove this normalization from OSPA, the distance is $j^{1/p} c$, which increases with $j$. This example is a clear motivation as to why the normalization should be removed from the OSPA metric to evaluate MTT algorithms.

We refer to the OSPA metric without normalization, i.e., GOSPA with $\alpha = 1$, as unnormalized OSPA. The OSPA metric without the normalization has been used in [12, Sec. IV] to obtain the minimum mean OSPA estimate. Even though [12] makes use of the unnormalized OSPA as a cost function, it has not been previously proved that it is a metric.
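To make the definition concrete, the following Python sketch evaluates (1) by brute-force enumeration of permutations. It is our own illustration rather than code from the paper: it assumes the Euclidean base metric, the function and variable names are ours, and the enumeration is only practical for very small sets. The usage line reproduces the trend of Example 1 for j = 3 false targets.

```python
import itertools
import numpy as np

def gospa_brute_force(X, Y, c=10.0, p=2, alpha=2.0):
    """GOSPA distance d_p^(c,alpha)(X, Y) from Definition 1, with the Euclidean
    base metric; X and Y are lists of 1-D numpy arrays (target states)."""
    if len(X) > len(Y):            # the definition sets d(X, Y) = d(Y, X) when |X| > |Y|
        X, Y = Y, X
    # localization cost, minimised over all assignments of X into Y
    best = min(
        sum(min(np.linalg.norm(x - Y[j]), c) ** p for x, j in zip(X, perm))
        for perm in itertools.permutations(range(len(Y)), len(X))
    )
    card = (c ** p / alpha) * (len(Y) - len(X))   # cardinality-mismatch penalty
    return (best + card) ** (1.0 / p)

# Example 1: empty ground truth versus an estimate with j = 3 false targets.
# Unnormalized OSPA (alpha = 1) grows as j**(1/p) * c, unlike normalized OSPA.
Yj = [np.array([10.0 * k, 0.0]) for k in range(3)]
print(gospa_brute_force([], Yj, c=10.0, p=2, alpha=1.0))   # ~ sqrt(3) * 10 = 17.32
```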
B. Motivation for setting α = 2 in MTT

In this section, we argue that the choice of $\alpha = 2$ in GOSPA is the most appropriate one for MTT algorithm evaluations. We show that, with this choice, the distance metric can be broken down into localization errors for properly detected targets, which are assigned to target estimates, and the error due to missed and false targets, which are left unassigned as there is no correspondence in the other set. This is in accordance with classical performance evaluation methods for MTT [2, Sec. 13.6], [3]–[6].

For the sake of this discussion, we assume that $X$ is the set of true targets and $Y$ is the estimate, though the metric is of course symmetric. Let us consider $x \in X$ and $y \in Y$ such that all the points in $Y$ are far from $x$ and all the points in $X$ are far from $y$. In this case, the target $x$ has been missed and the estimator has presented a false target $y$. Following [6], we refer to these two targets as unassigned targets, even though they may or may not be associated to another target in the permutation in (1). If one of these unassigned targets is not associated to another target in the permutation in (1), it contributes with a cost $c^p/\alpha$. On the other hand, if two unassigned targets $x$ and $y$ are associated to each other in the permutation in (1), the cost contribution of the pair is $d^{(c)}(x,y)^p = c^p$.

The basic idea behind selecting $\alpha = 2$ is that the cost for a single unassigned (missed or false) target should be the same whether or not it is associated to another target in the permutation in (1). Therefore, given that a pair of unassigned targets costs $c^p$ and an unassigned target costs $c^p/\alpha$, we argue that $\alpha = 2$ is the most appropriate choice. Due to the importance of choosing $\alpha = 2$, from this point on, whenever we write GOSPA, we refer to GOSPA with $\alpha = 2$, unless stated otherwise. In GOSPA, any unassigned (missed or false) target always costs $c^p/2$ and, as we will see next, GOSPA contains localization errors for properly detected targets and a cost $c^p/2$ for unassigned targets.

In fact, GOSPA can be written in an alternative form, which further highlights the difference with OSPA and clarifies the resemblance with classical MTT evaluation methods. To show this, we make the assignment/unassignment of targets explicit by reformulating the GOSPA metric in terms of 2D assignment functions [2, Sec. 6.5], [13, Ch. 17] instead of permutations. An assignment set $\gamma$ between the sets $\{1, \ldots, |X|\}$ and $\{1, \ldots, |Y|\}$ is a set that has the following properties: $\gamma \subseteq \{1, \ldots, |X|\} \times \{1, \ldots, |Y|\}$; $(i,j), (i,j') \in \gamma \Rightarrow j = j'$; and $(i,j), (i',j) \in \gamma \Rightarrow i = i'$, where the last two properties ensure that every $i$ and $j$ gets at most one assignment. Let $\Gamma$ denote the set of all possible assignment sets $\gamma$. Then, we can formulate the following proposition.

Proposition 1. The GOSPA metric for $\alpha = 2$ can be expressed as an optimisation over assignment sets:

d_p^{(c,2)}(X,Y) = \left[ \min_{\gamma \in \Gamma} \left( \sum_{(i,j) \in \gamma} d(x_i, y_j)^p + \frac{c^p}{2}\,\bigl(|X| + |Y| - 2|\gamma|\bigr) \right) \right]^{1/p}.    (2)

Proof: See Appendix B.
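In practice, (2) can be evaluated with an off-the-shelf rectangular linear assignment solver: give every pair the truncated cost min(d(x_i, y_j), c)^p and count the pairs whose optimal cost reaches c^p as one missed plus one false target, which mirrors the construction in the proof in Appendix B. The sketch below is our own illustration of this idea (the names, the Euclidean base metric and the use of SciPy are our choices, not prescribed by the paper); it also returns the decomposition into localization, missed-target and false-target costs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gospa_assignment(X, Y, c=10.0, p=2):
    """GOSPA (alpha = 2) via the assignment formulation in Proposition 1.
    Returns (gospa, localization_cost, n_missed, n_false), treating X as the
    ground truth and Y as the estimate; both are lists of 1-D numpy arrays."""
    nx, ny = len(X), len(Y)
    if nx == 0 or ny == 0:
        # only missed (or only false) targets: each one costs c**p / 2
        return ((c ** p / 2 * (nx + ny)) ** (1 / p), 0.0, nx, ny)
    # truncated pairwise costs min(d(x_i, y_j), c)**p
    D = np.array([[min(np.linalg.norm(x - y), c) ** p for y in Y] for x in X])
    rows, cols = linear_sum_assignment(D)        # optimal 2-D assignment
    detected = D[rows, cols] < c ** p            # pairs strictly closer than the cut-off
    loc = float(D[rows, cols][detected].sum())   # localization cost of proper detections
    n_det = int(detected.sum())
    n_missed, n_false = nx - n_det, ny - n_det
    return (loc + (c ** p / 2) * (n_missed + n_false)) ** (1 / p), loc, n_missed, n_false
```

Unlike the permutation form, this runs in polynomial time and directly reports how many targets were counted as missed or false.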

This proposition confirms that GOSPA penalizes unassigned targets and localization errors for properly detected targets. The properly detected targets and their estimates are assigned according to the set $\gamma$, so the first term represents their localization errors. Missed and false targets are left unassigned, as done in [6], and each of them is penalized by $c^p/2$. To understand this, we first note that $|\gamma|$ is the number of properly detected targets. Hence, $|X| - |\gamma|$ and $|Y| - |\gamma|$ represent the numbers of missed and false targets, respectively, and the term $c^p(|X| + |Y| - 2|\gamma|)/2$ therefore implies that any missed or false target yields a cost $c^p/2$. It should also be noted that the notion of cut-off metric $d^{(c)}(\cdot,\cdot)$ is not needed in this representation and there is no cardinality mismatch term. Also, we remark that this representation cannot be used for OSPA or GOSPA with $\alpha \ne 2$.

We illustrate the choice of $\alpha = 2$ in GOSPA and compare it with OSPA in the following example.

[Figure 1: The points in set X and the points in Y for (a) Y_a = {y_1, y_2} and (b) Y_b = {y_1}. The permutation associations are shown using dashed lines and the cut-off distances are shown on top of them: the distance between x_1 and y_1 is Δ_1 and the one between x_2 and y_2 is cut off at c. In this illustration, Δ_1 < c.]

Example 2. Consider the case where the ground truth is $X = \{x_1, x_2\}$ and there are two estimates $Y_a = \{y_1, y_2\}$ and $Y_b = \{y_1\}$, as illustrated in Figures 1(a) and 1(b). Targets $x_2$ and $y_2$ are very far away, so that it is obvious that $y_2$ is not an estimate of $x_2$. Clearly, besides the localization error $\Delta_1$ between $x_1$ and $y_1$, the estimate $Y_a$ has missed target $x_2$ and reported a false target $y_2$, whereas $Y_b$ has only missed target $x_2$. OSPA and unnormalized OSPA provide the same distance to the ground truth for both estimates, namely $\left((\Delta_1^p + c^p)/2\right)^{1/p}$ and $\left(\Delta_1^p + c^p\right)^{1/p}$, respectively. As a result, according to these metrics, both estimates are equally accurate, which does not agree with intuition and classical MTT evaluation methods. On the contrary, the GOSPA metric shows a desirable trend, since $d_p^{(c,2)}(X, Y_a) = \left(\Delta_1^p + c^p\right)^{1/p}$ is larger than $d_p^{(c,2)}(X, Y_b) = \left(\Delta_1^p + c^p/2\right)^{1/p}$.
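To make Example 2 concrete, the snippet below evaluates both estimates with the gospa_assignment sketch from Section II-B. The coordinates, c and p are hypothetical stand-ins of our own choosing, picked only so that y_1 is close to x_1 while x_2 and y_2 are far from everything else.

```python
import numpy as np

# requires gospa_assignment from the sketch in Section II-B
x1, x2 = np.array([0.0, 0.0]), np.array([50.0, 0.0])
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 50.0])
for name, est in [("Ya", [y1, y2]), ("Yb", [y1])]:
    d, loc, n_miss, n_false = gospa_assignment([x1, x2], est, c=10.0, p=2)
    print(f"{name}: GOSPA={d:.2f}, localization={loc:.2f}, missed={n_miss}, false={n_false}")
# Ya (one miss + one false target) gets a larger GOSPA than Yb (one miss),
# whereas OSPA assigns both estimates the same distance.
```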
III. Performance evaluation of MTT algorithms

In the previous section, we studied metrics between finite sets of targets. It was then implicitly assumed that the ground truth and the estimates are deterministic. However, MTT is often formulated as a Bayesian filtering problem where the ground truth is an RFS and the estimates are sets, which depend deterministically on the observed data [1]. For performance evaluation, in many cases, we average over several realizations of the data, so the estimates are RFSs as well. Therefore, evaluating the performance of several algorithms is in fact a comparison between the RFS of the ground truth and the RFSs of the estimates. As in the case of deterministic sets [8], it is highly desirable to establish metrics for RFSs for performance evaluation, which is the objective of this section. We begin with a discussion on metrics for vectors and random vectors, and then show how we use these concepts to extend the GOSPA metric to RFSs.

There are several metrics in the literature for random vectors $x, y \in \mathbb{R}^N$. If we have a metric in $\mathbb{R}^N$, we have a metric on random vectors in $\mathbb{R}^N$ by taking the expected value [14, Sec. 2.2]. Then, a natural choice is to compute the average Euclidean distance,

\mathrm{E}[\|x - y\|_2] = \int \|x - y\|_2 \, f(x,y) \, dx \, dy,

as a metric on random vectors, where $\|x - y\|_2$ is the Euclidean distance and $f(x,y)$ is the joint density of $x$ and $y$. Another popular metric for random vectors is the root mean square error (RMSE) metric, $\sqrt{\mathrm{E}[\|x - y\|_2^2]}$ [14, Sec. 2.2]. An advantage with the RMSE, compared to the average Euclidean error, is that it is easier to use it to construct optimal estimators, since it is equivalent to minimizing the mean square error (MSE); note that the MSE, $\mathrm{E}[\|x - y\|_2^2]$, and the squared Euclidean distance $\|x - y\|_2^2$ are not metrics.

Similar to the Euclidean metric for vectors, one can use the GOSPA metric defined over finite sets to define metrics over RFSs. Following the approaches in the random vector case, the root mean square GOSPA and the mean GOSPA seem like natural extensions to RFSs. Below, we establish a more general metric for RFSs based on GOSPA for arbitrary $\alpha$.

Proposition 2. For $1 \le p' < \infty$, $c > 0$ and $0 < \alpha \le 2$,

\left( \mathrm{E}\left[ d_p^{(c,\alpha)}(X,Y)^{p'} \right] \right)^{1/p'}

is a metric for RFSs $X$ and $Y$.

Proof: See Appendix C.

For the GOSPA analogue of the RMSE, one can set $p = p' = 2$ and use the Euclidean distance for $d(\cdot,\cdot)$. Similar to the minimum MSE estimators for random vectors, one can equivalently use the mean square GOSPA $\mathrm{E}\bigl[ d_2^{(c,2)}(X,Y)^2 \bigr]$ for obtaining sound RFS estimators based on metrics. In the RFS case, there are estimators that are obtained by minimizing the mean square OSPA [15]–[18] with $p = 2$ (or, equivalently, the root mean square OSPA) and the Euclidean distance as base metric. One can extend the proof of the proposition to show that the root mean square OSPA is also a metric, which has not been previously established in the literature.
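Proposition 2 suggests a simple Monte Carlo recipe for evaluating MTT algorithms: draw samples of the ground-truth RFS and of the estimate RFS, compute the GOSPA distance for each draw, and average. The sketch below is our own illustration (it reuses gospa_assignment from the earlier sketch, and the multi-Bernoulli-style generator and the numbers are placeholders, not the scenario of Section IV); p' = 1 gives the mean GOSPA and p' = 2 the root mean square GOSPA.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rfs(components, rng):
    """One realization of a multi-Bernoulli-style RFS: each (mean, r) component
    exists with probability r and, if present, is drawn as Gaussian around mean."""
    return [rng.normal(mean, 1.0) for mean, r in components if rng.random() < r]

def monte_carlo_gospa(truth, estimate, n_mc=1000, p_prime=1, **gospa_kwargs):
    """Estimate E[d(X, Y)**p_prime]**(1/p_prime); p_prime=1 is the mean GOSPA,
    p_prime=2 the root mean square GOSPA. Requires gospa_assignment (Section II-B)."""
    draws = [gospa_assignment(sample_rfs(truth, rng), sample_rfs(estimate, rng),
                              **gospa_kwargs)[0] ** p_prime for _ in range(n_mc)]
    return float(np.mean(draws)) ** (1.0 / p_prime)

# Hypothetical example: two certain targets versus an estimate with one good
# component and one far-away (false) component.
truth = [(np.array([0.0, 0.0]), 1.0), (np.array([5.0, 5.0]), 1.0)]
estimate = [(np.array([0.3, -0.2]), 1.0), (np.array([20.0, 20.0]), 1.0)]
print(monte_carlo_gospa(truth, estimate, p_prime=1, c=8.0, p=2))
```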

IV. Illustrations

In this section, we show how GOSPA with $\alpha = 2$ presents values that agree with the intuition and the guidelines of classical MTT performance evaluation algorithms [4], while the OSPA and unnormalized OSPA metrics do not. We illustrate these results for several examples with varying numbers of missed and false targets in the estimates. As mentioned in Section III, in a Bayesian setting, both the ground truth and the estimates are RFSs, and we want to determine which estimate is closest to the ground truth. Rather than providing a full MTT simulation, we assume that the ground truth and the estimates are specific RFSs, which are easy to visualize and are useful to illustrate the major aspects of the proposed metrics.

[Figure 2: The samples of the ground truth X and the estimate Y are illustrated in panels (a) and (b), together with the cut-off circles around the points. The estimate Y has 10 false targets, with indexes from 3 to 12, and 2 properly detected targets, with indexes 1 and 2, corresponding to the two true targets in panel (a).]

We consider a ground truth $X$ (see Figure 2(a)) which is a multi-Bernoulli RFS [8, Sec. 4.3.4] composed of two independent Bernoulli RFSs, each with existence probability 1. The probability densities of the individual RFSs are Gaussian densities with identity covariance matrix $I$, centred at the two true target positions shown in Figure 2(a). Therefore, there are always two targets present, which are distributed independently with their corresponding densities.

We consider scenarios with different estimates $Y$ for this ground truth. By varying $Y$, the number of missed targets and the number of false targets in each scenario are chosen from $\{0, 1, 2\}$ and $\{0, 1, 3, 10\}$, respectively. In all the cases, the estimate $Y$ is also a multi-Bernoulli RFS that contains the Bernoulli components depicted in Figure 2(b). The components with indexes 1 and 2 are Gaussian components, also with covariance $I$, whose means lie close to the two targets of the ground truth in Figure 2(a); they correspond to estimates of these targets. The remaining components, with indexes 3 to 12, are false targets. In scenarios where there is one missed target, we consider that component 1 has existence probability 1 but component 2 has probability 0. In scenarios where there are no missed targets, we consider that both components have existence probability 1. In the scenarios where the estimate reports $n$ false targets, the existence probability takes the value 1 for the components 3 to $n+2$ and the value 0 for the remaining false target components.

We compute the GOSPA and OSPA metrics for RFSs in Proposition 2 for the above scenarios and average the metric values over 1000 Monte Carlo points. We set $c = 8$ and the value of $p = p'$ is chosen from $\{1, 2\}$. The Euclidean metric is used as the base distance $d(\cdot,\cdot)$. The estimation errors of these scenarios are tabulated in Table I. The table has estimates with an increasing number of missed targets when traversed across columns and an increasing number of false targets when traversed across rows.

[Table I: Table capturing the trends shown by the mean metric (p = p' = 1) and the root mean square metric (p = p' = 2) for varying numbers of missed and false targets. Columns: GOSPA (α = 2), OSPA and unnormalized OSPA, each for p = p' in {1, 2} and 0, 1 or 2 missed targets; rows: 0, 1, 3 or 10 false targets. The entries discussed in the text are highlighted in red and blue.]

Let us first analyze the behavior of the different metrics for a varying number of missed targets. Intuitively, as one traverses across columns, the distance between the RFSs should increase with an increasing number of missed targets. This trend is observed for GOSPA and the OSPA metric, for both $p = p' = 1$ and $p = p' = 2$, but the unnormalized OSPA metric shows undesired behaviors when there are false targets in the scenarios (the entries with red text in Table I). To explain this, we look at the expression for the unnormalized OSPA in these scenarios. If $n$ and $m$ are the numbers of false and missed targets, and $d_1 < c$ and $d_2 < c$ are the cut-off distances for the properly detected targets, then the unnormalized OSPA when $n \ge 2$ is

d_p^{(c,1)} = \begin{cases} (n c^p)^{1/p}, & m = 2, \\ (d_1^p + n c^p)^{1/p}, & m = 1, \\ (d_1^p + d_2^p + n c^p)^{1/p}, & m = 0. \end{cases}

For $n = 1$, $d_p^{(c,1)}$ takes the values $(2 c^p)^{1/p}$, $(d_1^p + c^p)^{1/p}$ and $(d_1^p + d_2^p + c^p)^{1/p}$, respectively. Clearly, for fixed $n \ge 2$, $d_p^{(c,1)}$ decreases with an increasing number of missed targets $m$, which is not desirable.

Let us now analyze the behavior of the metrics for a varying number of false targets. As the number of false targets increases, the metric should increase [6]. This trend is displayed by GOSPA with $\alpha = 2$ again. On the other hand, the unnormalized OSPA shows a non-decreasing behavior (the entries with blue text in Table I), which is not desirable. The OSPA metric also shows counterintuitive behavior, as we discussed in Example 1 in Section II-A: when both targets are missed, the OSPA metric is constant for a varying number of false targets. Also, for the case with one missed target, OSPA and unnormalized OSPA have the same metric values when there are no false targets and when there is one false target. This trend is similar to the behavior we observed in Example 2.
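As a quick sanity check of these expressions, the snippet below evaluates them, together with the corresponding GOSPA (alpha = 2) costs implied by the decomposition in Section II-B, for arbitrary values d_1, d_2 < c of our own choosing; it only reproduces the qualitative trend, not the Monte Carlo values of Table I.

```python
# Our own arithmetic check, with arbitrary d1, d2 < c: for fixed n >= 2 the
# unnormalized OSPA decreases as the number of misses m grows, while GOSPA
# (alpha = 2) grows for these values.
c, p, n, d1, d2 = 8.0, 1, 3, 2.0, 3.0
unnormalized_ospa = {m: v ** (1 / p) for m, v in {
    2: n * c**p,
    1: d1**p + n * c**p,
    0: d1**p + d2**p + n * c**p}.items()}
gospa = {m: v ** (1 / p) for m, v in {
    2: (c**p / 2) * (n + 2),
    1: d1**p + (c**p / 2) * (n + 1),
    0: d1**p + d2**p + (c**p / 2) * n}.items()}
print(unnormalized_ospa)  # {2: 24.0, 1: 26.0, 0: 29.0} -> smaller for more misses
print(gospa)              # {2: 20.0, 1: 18.0, 0: 17.0} -> larger for more misses
```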

V. Conclusions

In this paper, we have presented the GOSPA metric. It is a metric for sets of targets that penalizes localization errors for properly detected targets and missed and false targets, in accordance with the classical MTT performance evaluation methods. In contrast to the OSPA metric, the GOSPA metric therefore encourages trackers to have as few false and missed targets as possible. In addition, we have extended the GOSPA metric to the space of random finite sets of targets. This is important for the performance evaluation of MTT algorithms.

Appendix A: Proof of the triangle inequality of GOSPA

In the proof, an extension of Minkowski's inequality [19] to sequences of different lengths, obtained by appending zeros to the shorter sequence, is used. Let us say we have two non-negative sequences $(a_i)_{i=1}^m$ and $(b_i)_{i=1}^n$ such that $m \le n$. We extend the sequence $(a_i)$ such that $a_i = 0$ for $i = m+1, \ldots, n$. Then, using Minkowski's inequality on this extended sequence, we get that

\left( \sum_{i=1}^{m} (a_i + b_i)^p + \sum_{i=m+1}^{n} b_i^p \right)^{1/p} \le \left( \sum_{i=1}^{m} a_i^p \right)^{1/p} + \left( \sum_{i=1}^{n} b_i^p \right)^{1/p}    (3)

for $1 \le p < \infty$. We use this result several times in our proof. We would like to prove the triangle inequality

d_p^{(c,\alpha)}(X,Y) \le d_p^{(c,\alpha)}(X,Z) + d_p^{(c,\alpha)}(Z,Y)    (4)

for any three sets $X$, $Y$ and $Z$. The proof is divided into three cases based on the values of $|X|$, $|Y|$ and $|Z|$. Without loss of generality, we assume $|X| \le |Y|$ in all three cases, since GOSPA is symmetric in $X$ and $Y$.

Case 1: $|X| \le |Y| \le |Z|$. For any $\pi \in \Pi_{|Y|}$,

d_p^{(c,\alpha)}(X,Y) \le \left( \sum_{i=1}^{|X|} d^{(c)}(x_i, y_{\pi(i)})^p + \frac{c^p}{\alpha}(|Y| - |X|) \right)^{1/p}.    (5)

Using the triangle inequality on the cut-off metric $d^{(c)}(\cdot,\cdot)$, we get that, for any $\pi \in \Pi_{|Y|}$ and for any $\sigma \in \Pi_{|Z|}$,

d_p^{(c,\alpha)}(X,Y) \le \left( \sum_{i=1}^{|X|} \bigl( d^{(c)}(x_i, z_{\sigma(i)}) + d^{(c)}(z_{\sigma(i)}, y_{\pi(i)}) \bigr)^p + \frac{c^p}{\alpha}(|Y| - |X|) \right)^{1/p}    (6)

\le \left( \sum_{i=1}^{|X|} \bigl( d^{(c)}(x_i, z_{\sigma(i)}) + d^{(c)}(z_{\sigma(i)}, y_{\pi(i)}) \bigr)^p + \sum_{i=|X|+1}^{|Y|} \Bigl( \frac{c}{\alpha^{1/p}} + d^{(c)}(z_{\sigma(i)}, y_{\pi(i)}) \Bigr)^p + \frac{2^p c^p}{\alpha}(|Z| - |Y|) \right)^{1/p}    (7)

= \left( \sum_{i=1}^{|Z|} (a_i + b_i)^p \right)^{1/p}    (8)

\le \left( \sum_{i=1}^{|X|} d^{(c)}(x_i, z_{\sigma(i)})^p + \frac{c^p}{\alpha}(|Z| - |X|) \right)^{1/p} + \left( \sum_{i=1}^{|Y|} d^{(c)}(z_{\sigma(i)}, y_{\pi(i)})^p + \frac{c^p}{\alpha}(|Z| - |Y|) \right)^{1/p},    (9)

where, in (7), each term of the cardinality penalty $\frac{c^p}{\alpha}(|Y| - |X|)$ has been upper bounded by $\bigl( \frac{c}{\alpha^{1/p}} + d^{(c)}(z_{\sigma(i)}, y_{\pi(i)}) \bigr)^p$ and a non-negative term has been added, and, in (8), $a_i = d^{(c)}(x_i, z_{\sigma(i)})$ for $i \le |X|$ and $a_i = c/\alpha^{1/p}$ otherwise, while $b_i = d^{(c)}(z_{\sigma(i)}, y_{\pi(i)})$ for $i \le |Y|$ and $b_i = c/\alpha^{1/p}$ otherwise. To arrive at the last inequality, Minkowski's inequality in (3) is used. Since $\pi$ is a bijection, we can invert $\pi$ to arrive at

d_p^{(c,\alpha)}(X,Y) \le \left( \sum_{i=1}^{|X|} d^{(c)}(x_i, z_{\sigma(i)})^p + \frac{c^p}{\alpha}(|Z| - |X|) \right)^{1/p} + \left( \sum_{i=1}^{|Y|} d^{(c)}(z_{\sigma(\pi^{-1}(i))}, y_i)^p + \frac{c^p}{\alpha}(|Z| - |Y|) \right)^{1/p}.    (10)

The composition $\sigma \circ \pi^{-1}$, completed arbitrarily on the remaining indices, is a permutation on $\{1, \ldots, |Z|\}$; let us denote it as $\tau$. So, for any $\tau, \sigma \in \Pi_{|Z|}$,

d_p^{(c,\alpha)}(X,Y) \le \left( \sum_{i=1}^{|X|} d^{(c)}(x_i, z_{\sigma(i)})^p + \frac{c^p}{\alpha}(|Z| - |X|) \right)^{1/p} + \left( \sum_{i=1}^{|Y|} d^{(c)}(z_{\tau(i)}, y_i)^p + \frac{c^p}{\alpha}(|Z| - |Y|) \right)^{1/p},    (11)

which also holds for the $\sigma$ and $\tau$ that minimize the first and the second term on the right hand side. This proves the triangle inequality for this case.

Case 2: $|X| \le |Z| \le |Y|$. As before, for any $\pi \in \Pi_{|Y|}$ and $\sigma \in \Pi_{|Z|}$,

d_p^{(c,\alpha)}(X,Y) \le \left( \sum_{i=1}^{|X|} \bigl( d^{(c)}(x_i, z_{\sigma(i)}) + d^{(c)}(z_{\sigma(i)}, y_{\pi(i)}) \bigr)^p + \frac{c^p}{\alpha}(|Y| - |X|) \right)^{1/p}    (12)

\le \left( \sum_{i=1}^{|X|} \bigl( d^{(c)}(x_i, z_{\sigma(i)}) + d^{(c)}(z_{\sigma(i)}, y_{\pi(i)}) \bigr)^p + \sum_{i=|X|+1}^{|Z|} \Bigl( \frac{c}{\alpha^{1/p}} + d^{(c)}(z_{\sigma(i)}, y_{\pi(i)}) \Bigr)^p + \frac{c^p}{\alpha}(|Y| - |Z|) \right)^{1/p}    (13)

\le \left( \sum_{i=1}^{|X|} d^{(c)}(x_i, z_{\sigma(i)})^p + \frac{c^p}{\alpha}(|Z| - |X|) \right)^{1/p} + \left( \sum_{i=1}^{|Z|} d^{(c)}(z_{\sigma(i)}, y_{\pi(i)})^p + \frac{c^p}{\alpha}(|Y| - |Z|) \right)^{1/p},    (14)

where (13) follows from $c^p/\alpha \le \bigl( c/\alpha^{1/p} + d^{(c)}(z_{\sigma(i)}, y_{\pi(i)}) \bigr)^p$ and (14) from Minkowski's inequality in (3). From here, we can argue similarly to Case 1 and show that the triangle inequality holds.

Case 3: $|Z| \le |X| \le |Y|$. For any $\pi \in \Pi_{|Y|}$,

d_p^{(c,\alpha)}(X,Y) \le \left( \sum_{i=1}^{|Z|} d^{(c)}(x_i, y_{\pi(i)})^p + \sum_{i=|Z|+1}^{|X|} d^{(c)}(x_i, y_{\pi(i)})^p + \frac{c^p}{\alpha}(|Y| - |X|) \right)^{1/p} \le \left( \sum_{i=1}^{|Z|} d^{(c)}(x_i, y_{\pi(i)})^p + \frac{2^p c^p}{\alpha}(|X| - |Z|) + \frac{c^p}{\alpha}(|Y| - |X|) \right)^{1/p}.    (15)

To get the above inequality, for $i = |Z|+1, \ldots, |X|$, we used the fact that $d^{(c)}(x_i, y_{\pi(i)}) \le c \le 2c/\alpha^{1/p}$ when $0 < \alpha \le 2$. Using the triangle inequality on the cut-off metric for the first sum, for any $\sigma \in \Pi_{|Z|}$,

d_p^{(c,\alpha)}(X,Y) \le \left( \sum_{i=1}^{|Z|} \bigl( d^{(c)}(x_i, z_{\sigma(i)}) + d^{(c)}(z_{\sigma(i)}, y_{\pi(i)}) \bigr)^p + \frac{2^p c^p}{\alpha}(|X| - |Z|) + \frac{c^p}{\alpha}(|Y| - |X|) \right)^{1/p}.    (16)

From here, the arguments are similar to the ones in the last two cases.

Appendix B: Proof of Proposition 1

We proceed to prove Proposition 1. Given $X$ and $Y$, with $|X| \le |Y|$ without loss of generality, each possible permutation $\pi \in \Pi_{|Y|}$ in (1) has a corresponding assignment set $\gamma_\pi = \{(i,j) : j = \pi(i) \text{ and } d(x_i, y_j) < c\}$ such that we can write

d_p^{(c,2)}(X,Y) = \min_{\pi \in \Pi_{|Y|}} \left[ \sum_{(i,j) \in \gamma_\pi} d(x_i, y_j)^p + c^p\bigl(|X| - |\gamma_\pi|\bigr) + \frac{c^p}{2}\bigl(|Y| - |X|\bigr) \right]^{1/p},    (17)

where we have written $d(x_i, y_j)$ instead of $d^{(c)}(x_i, y_{\pi(i)})$, as the distance between the assigned points in $\gamma_\pi$ is smaller than $c$. Also, $|X| - |\gamma_\pi|$ is the number of pairs $(x_i, y_{\pi(i)})$ for which $d^{(c)}(\cdot,\cdot) = c$, and the second term compensates for the fact that these pairs are not accounted for when we sum over $(i,j) \in \gamma_\pi$. Rearranging terms, we obtain

d_p^{(c,2)}(X,Y) = \min_{\pi \in \Pi_{|Y|}} \left[ \sum_{(i,j) \in \gamma_\pi} d(x_i, y_j)^p + \frac{c^p}{2}\bigl(|X| + |Y| - 2|\gamma_\pi|\bigr) \right]^{1/p}.

As the space of assignment sets $\Gamma$ is bigger than the set of assignment sets induced by permutations $\pi \in \Pi_{|Y|}$, we have

d_p^{(c,2)}(X,Y) \ge \left[ \min_{\gamma \in \Gamma} \left( \sum_{(i,j) \in \gamma} d(x_i, y_j)^p + \frac{c^p}{2}\bigl(|X| + |Y| - 2|\gamma|\bigr) \right) \right]^{1/p}.    (18)

We have not yet finished the proof, as we have obtained an inequality. Let us consider $\gamma^*$ to be the value of the assignment set that minimises the distance in Proposition 1. First,

\left[ \sum_{(i,j) \in \gamma^*} d(x_i, y_j)^p + \frac{c^p}{2}\bigl(|X| + |Y| - 2|\gamma^*|\bigr) \right]^{1/p} = \left[ \sum_{(i,j) \in \gamma^*} d^{(c)}(x_i, y_j)^p + \frac{c^p}{2}\bigl(|X| + |Y| - 2|\gamma^*|\bigr) \right]^{1/p},    (19)

due to the fact that, otherwise, we could construct a better assignment set $\gamma' = \gamma^* \setminus \gamma^*_c$, where $\gamma^*_c = \{(i,j) \in \gamma^* : d(x_i, y_j) > c\}$. That is, we know that $\gamma^*$ does not contain pairs $(i,j)$ for which $d(\cdot,\cdot) > c$. On the other hand, if two points are left unassigned in the optimal assignment, their distance must be at least $c$, so $d^{(c)}(\cdot,\cdot) = c$, as, otherwise, there would be an assignment that returns a lower value than the optimal one by assigning them.

We can now construct a corresponding permutation $\pi_{\gamma^*} \in \Pi_{|Y|}$ as follows: $\pi_{\gamma^*}(i) = j$ if $(i,j) \in \gamma^*$, and the rest of the components of $\pi_{\gamma^*}$ can be filled out arbitrarily, as any selection does not change the value of the previous equation. Then,

\left[ \sum_{(i,j) \in \gamma^*} d^{(c)}(x_i, y_j)^p + \frac{c^p}{2}\bigl(|X| + |Y| - 2|\gamma^*|\bigr) \right]^{1/p} = \left[ \sum_{i=1}^{|X|} d^{(c)}(x_i, y_{\pi_{\gamma^*}(i)})^p + \frac{c^p}{2}\bigl(|Y| - |X|\bigr) \right]^{1/p} \ge d_p^{(c,2)}(X,Y).

Therefore, we have now proved that

\left[ \min_{\gamma \in \Gamma} \left( \sum_{(i,j) \in \gamma} d(x_i, y_j)^p + \frac{c^p}{2}\bigl(|X| + |Y| - 2|\gamma|\bigr) \right) \right]^{1/p} \ge d_p^{(c,2)}(X,Y),    (20)

which, together with (18), proves Proposition 1.

Appendix C: Proof of the average GOSPA metric

For an RFS $X$ with multi-object density function $f(\cdot)$ and for a real-valued function $g(\cdot)$ of an RFS, using the set integral, the expectation of $g(X)$ [20] is

\mathrm{E}[g(X)] = \int f(X)\, g(X)\, \delta X    (21)
= \sum_{n=0}^{\infty} \frac{1}{n!} \int f(\{x_1, \ldots, x_n\})\, g(\{x_1, \ldots, x_n\})\, d(x_1, \ldots, x_n).

Similarly, we have that, for random finite sets $X$, $Y$ with joint density $f(\cdot,\cdot)$,

\mathrm{E}\bigl[ d_p^{(c,\alpha)}(X,Y)^{p'} \bigr]^{1/p'} = \left[ \int\int d_p^{(c,\alpha)}(X,Y)^{p'} f(X,Y)\, \delta X\, \delta Y \right]^{1/p'}.    (22)

Since $X$ and $Y$ are random finite sets, $f(X,Y)$ is non-zero only when $X$ and $Y$ are finite, and in this case $d_p^{(c,\alpha)}(X,Y)$ is finite. These conditions imply that $\mathrm{E}[d_p^{(c,\alpha)}(X,Y)^{p'}] < \infty$ is satisfied. The definiteness, non-negativity and symmetry properties of (22) are observed directly from the definition. Note that, for metrics in the probability space, the definiteness between random variables is in the almost sure sense [14, Sec. 2.2].

The proof of the triangle inequality is sketched below. In the proof, we use Minkowski's inequality for infinite sums and for integrals [19]. Using these Minkowski inequalities, we can show that the inequality also extends to cases that have both infinite sums and integrals, as appear in the set integrals. For real-valued functions $\psi_n(x_{1:n})$ and $\phi_n(x_{1:n})$ such that $\int |\psi_n(x_{1:n})|^{p'}\, dx_{1:n} < \infty$ and $\int |\phi_n(x_{1:n})|^{p'}\, dx_{1:n} < \infty$ for $n = 1, 2, \ldots$ and $1 \le p' < \infty$,

\left( \sum_{n=0}^{\infty} \int \bigl| \psi_n(x_{1:n}) + \phi_n(x_{1:n}) \bigr|^{p'} dx_{1:n} \right)^{1/p'} \le \left( \sum_{n=0}^{\infty} \int \bigl| \psi_n(x_{1:n}) \bigr|^{p'} dx_{1:n} \right)^{1/p'} + \left( \sum_{n=0}^{\infty} \int \bigl| \phi_n(x_{1:n}) \bigr|^{p'} dx_{1:n} \right)^{1/p'}.    (23)

The inequality in (23) can be proved by first using Minkowski's inequality for integrals on the LHS,

\left( \sum_{n=0}^{\infty} \int \bigl| \psi_n(x_{1:n}) + \phi_n(x_{1:n}) \bigr|^{p'} dx_{1:n} \right)^{1/p'} \le \left( \sum_{n=0}^{\infty} \left( \Bigl( \int \bigl| \psi_n(x_{1:n}) \bigr|^{p'} dx_{1:n} \Bigr)^{1/p'} + \Bigl( \int \bigl| \phi_n(x_{1:n}) \bigr|^{p'} dx_{1:n} \Bigr)^{1/p'} \right)^{p'} \right)^{1/p'},    (24)

and then using Minkowski's inequality for infinite sums on this, we get the RHS of (23).

Now, we use the above results for the triangle inequality of (22). Let us consider RFSs $X$, $Y$ and $Z$ with joint distribution $f(X,Y,Z)$:

\mathrm{E}\bigl[ d_p^{(c,\alpha)}(X,Y)^{p'} \bigr]^{1/p'} \le \mathrm{E}\Bigl[ \bigl( d_p^{(c,\alpha)}(X,Z) + d_p^{(c,\alpha)}(Z,Y) \bigr)^{p'} \Bigr]^{1/p'}    (25)

= \left[ \int\int\int \bigl( d_p^{(c,\alpha)}(X,Z) + d_p^{(c,\alpha)}(Z,Y) \bigr)^{p'} f(X,Y,Z)\, \delta X\, \delta Y\, \delta Z \right]^{1/p'}    (26)

= \left[ \int\int\int \Bigl( d_p^{(c,\alpha)}(X,Z)\, f(X,Y,Z)^{1/p'} + d_p^{(c,\alpha)}(Z,Y)\, f(X,Y,Z)^{1/p'} \Bigr)^{p'} \delta X\, \delta Y\, \delta Z \right]^{1/p'}.    (27)

If we expand the set integrals, they are of the form

\mathrm{E}\bigl[ d_p^{(c,\alpha)}(X,Y)^{p'} \bigr]^{1/p'} \le \left[ \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \int\int\int \bigl( f_1(\{x_1,\ldots,x_i\},\{y_1,\ldots,y_j\},\{z_1,\ldots,z_k\}) + f_2(\{x_1,\ldots,x_i\},\{y_1,\ldots,y_j\},\{z_1,\ldots,z_k\}) \bigr)^{p'} d(x_1,\ldots,x_i)\, d(y_1,\ldots,y_j)\, d(z_1,\ldots,z_k) \right]^{1/p'},    (28)

where

f_1(\{x_1,\ldots,x_i\},\{y_1,\ldots,y_j\},\{z_1,\ldots,z_k\}) = d_p^{(c,\alpha)}(\{x_1,\ldots,x_i\},\{z_1,\ldots,z_k\}) \left( \frac{f(\{x_1,\ldots,x_i\},\{y_1,\ldots,y_j\},\{z_1,\ldots,z_k\})}{i!\, j!\, k!} \right)^{1/p'}    (29)

and

f_2(\{x_1,\ldots,x_i\},\{y_1,\ldots,y_j\},\{z_1,\ldots,z_k\}) = d_p^{(c,\alpha)}(\{y_1,\ldots,y_j\},\{z_1,\ldots,z_k\}) \left( \frac{f(\{x_1,\ldots,x_i\},\{y_1,\ldots,y_j\},\{z_1,\ldots,z_k\})}{i!\, j!\, k!} \right)^{1/p'}.    (30)

The multiple integrals and sums in (28) can be considered as one major integral and sum. Using Minkowski's inequality in (23) for infinite sums and integrals on (28), we get

\mathrm{E}\bigl[ d_p^{(c,\alpha)}(X,Y)^{p'} \bigr]^{1/p'} \le \left[ \int\int\int d_p^{(c,\alpha)}(X,Z)^{p'} f(X,Y,Z)\, \delta X\, \delta Y\, \delta Z \right]^{1/p'} + \left[ \int\int\int d_p^{(c,\alpha)}(Y,Z)^{p'} f(X,Y,Z)\, \delta X\, \delta Y\, \delta Z \right]^{1/p'}    (31)

= \left[ \int\int d_p^{(c,\alpha)}(X,Z)^{p'} f(X,Z)\, \delta X\, \delta Z \right]^{1/p'} + \left[ \int\int d_p^{(c,\alpha)}(Y,Z)^{p'} f(Y,Z)\, \delta Y\, \delta Z \right]^{1/p'},    (32)

which finishes the proof of the triangle inequality.

References

[1] R. P. Mahler, Statistical Multisource-Multitarget Information Fusion. Artech House, 2007.
[2] S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems. Boston, MA: Artech House, 1999.
[3] B. E. Fridling and O. E. Drummond, "Performance evaluation methods for multiple-target-tracking algorithms," in Proceedings of the SPIE Conference on Signal and Data Processing of Small Targets, vol. 1481, 1991, pp. 371–383.
[4] R. L. Rothrock and O. E. Drummond, "Performance metrics for multiple-sensor multiple-target tracking," in Proceedings of the SPIE Conference on Signal and Data Processing of Small Targets, vol. 4048, 2000, pp. 521–531.
[5] S. Mabbs, "A performance assessment environment for radar signal processing and tracking algorithms," in Proceedings of the IEEE Pacific Rim Conference on Computers, Communications and Signal Processing, vol. 1, 1993.
[6] O. E. Drummond and B. E. Fridling, "Ambiguities in evaluating performance of multiple target tracking algorithms," in Proceedings of the SPIE Conference on Signal and Data Processing of Small Targets, 1992, pp. 326–337.
[7] B. Ristic, B.-N. Vo, and D. Clark, "Performance evaluation of multi-target tracking using the OSPA metric," in Proceedings of the 13th International Conference on Information Fusion, 2010.
[8] R. P. Mahler, Advances in Statistical Multisource-Multitarget Information Fusion. Artech House, 2014.
[9] J. R. Hoffman and R. P. Mahler, "Multitarget miss distance via optimal assignment," IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 34, no. 3, pp. 327–336, 2004.
[10] D. Schuhmacher, B.-T. Vo, and B.-N. Vo, "A consistent metric for performance evaluation of multi-object filters," IEEE Transactions on Signal Processing, vol. 56, no. 8, pp. 3447–3457, 2008.
[11] B. Ristic, B.-N. Vo, D. Clark, and B.-T. Vo, "A metric for performance evaluation of multi-target tracking algorithms," IEEE Transactions on Signal Processing, vol. 59, no. 7, pp. 3452–3457, 2011.
[12] J. L. Williams, "An efficient, variational approximation of the best fitting multi-Bernoulli filter," IEEE Transactions on Signal Processing, vol. 63, no. 1, pp. 258–273, 2015.
[13] A. Schrijver, Combinatorial Optimization: Polyhedra and Efficiency. Springer Science & Business Media, 2003, vol. 24.
[14] S. T. Rachev, L. Klebanov, S. V. Stoyanov, and F. Fabozzi, The Methods of Distances in the Theory of Probability and Statistics. Springer Science & Business Media, 2013.
[15] D. F. Crouse, P. Willett, M. Guerriero, and L. Svensson, "An approximate minimum MOSPA estimator," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2011.
[16] M. Baum, P. Willett, and U. D. Hanebeck, "Calculating some exact MMOSPA estimates for particle distributions," in Proceedings of the 15th International Conference on Information Fusion, 2012.
[17] M. Guerriero, L. Svensson, D. Svensson, and P. Willett, "Shooting two birds with two bullets: How to find minimum mean OSPA estimates," in Proceedings of the 13th International Conference on Information Fusion, 2010.
[18] Á. F. García-Fernández, M. R. Morelande, and J. Grajal, "Particle filter for extracting target label information when targets move in close proximity," in Proceedings of the 14th International Conference on Information Fusion, 2011.
[19] C. S. Kubrusly, Elements of Operator Theory. Springer, 2013.
[20] I. R. Goodman, R. P. Mahler, and H. T. Nguyen, Mathematics of Data Fusion. Springer Science & Business Media, 2013, vol. 37.
[21] Y. Xia, K. Granström, L. Svensson, and Á. F. García-Fernández, "Performance evaluation of multi-Bernoulli conjugate priors for multi-target filtering," accepted for publication in Proceedings of the 20th International Conference on Information Fusion, July 2017.
Media, 2003, vol 24 [4] S T Rachev, L Klebanov, S V Stoyanov, and F Fabozzi, The methods of distances in the theory of robability and statistics Sringer Science & Business Media, 203 [5] D F Crouse, P Willett, M Guerriero, and L Svensson, An aroximate minimum MOSPA estimator, in Proceedings of the IEEE International Conference on Acoustics, Seech and Signal Processing, 20