Target Tracking and Classification using Collaborative Sensor Networks

Xiaodong Wang, Department of Electrical Engineering, Columbia University

Talk Outline

- Background on distributed wireless sensor networks
- Single target tracking
- Multiple target tracking and classification

Distributed Wireless Sensor Networks

A sensor network is composed of a large number of sensor nodes, densely deployed either inside the phenomenon of interest or very close to it.

- Random deployment
- Cooperative capabilities

[Diagram: sensor networks sit at the intersection of network technology, computational power, and sensor technology.]

MANET vs. WSN

| MANET | WSN |
|---|---|
| High mobility | Low mobility (most cases!) |
| Peer-to-peer model | Peer-to-peer or master-slave |
| Low number of nodes | Large number of nodes |
| Energy constraint not too critical | Very severe energy constraint |
| No redundancy in data | Some redundancy in data |
| Connected to global network | Local network |
| Commercial user-end applications | Back-end monitoring and data collection |

Evolution of WSN

| Characteristic | Generation 1 | Generation 2 | Generation 3 |
|---|---|---|---|
| Timeline | 1980-1990 | 2000-2003 | 2010 |
| Size | Large shoe box or bigger | Pack of cards | Dust particle (MEMS) |
| Weight | Kilograms | Grams | Negligible |
| Node architecture | Separate sensing, processing and communication | Integrated sensing, processing and communication | Integrated sensing, processing and communication |
| Topology | Point to point, star | Client-server, peer to peer | Peer to peer |
| Power supply lifetime | Large batteries, hours to days | AA batteries, days to weeks | Solar, months to years |
| Deployment | Vehicle-placed or air-dropped single sensors | Hand emplaced | Embedded, sprinkled, left behind |

Single Target Tracking via Collaborative Sensor Network

[Figure: actual trajectory vs. trajectories estimated by OPT, PNN, and NN, with sensor positions; X (m) vs. Y (m).]

Sensor network architecture

- Sensor nodes are scattered in a field. Nodes collect data and route it back to the sink over a multihop wireless medium.
- Each node has limited energy and performs only on-demand processing tasks upon a query; otherwise it is in standby mode.
- Each node extracts relevant summary statistics from the raw data; these are stored locally inside the node and may be transmitted to other nodes upon request.
- To infer global information, the sensor network must facilitate efficient hierarchical information fusion.

Signal model

Motion model: with state $x_t = (x_t, y_t, v_t^x, v_t^y)$,

$$x_{t+\Delta t} = \begin{pmatrix} I_2 & \Delta t\, I_2 \\ 0 & I_2 \end{pmatrix} x_t + \begin{pmatrix} \frac{\Delta t^2}{2} I_2 \\ \Delta t\, I_2 \end{pmatrix} u_t, \qquad \Sigma_u = \mathrm{diag}(\sigma_x^2, \sigma_y^2).$$

Measurement model:

Relative distance model: $h(x_t) = K - 10\eta \log_{10}\!\left((y_t - \bar{y}_t)^2 + (x_t - \bar{x}_t)^2\right)$

Relative angle model: $h(x_t) = \arctan\!\left(\dfrac{y_t - \bar{y}_t}{x_t - \bar{x}_t}\right)$

where $(\bar{x}_t, \bar{y}_t)$ denotes the sensor position.
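As a concrete illustration, here is a minimal Python sketch of these two models. The sampling interval, noise levels, and the $K$, $\eta$ values are assumptions for illustration, not values from the talk.

```python
import numpy as np

dt = 1.0                          # sampling interval Delta t (assumption)
I2 = np.eye(2)

# Constant-velocity motion model for x_t = (x, y, vx, vy).
F = np.block([[I2, dt * I2],
              [np.zeros((2, 2)), I2]])
G = np.vstack([0.5 * dt**2 * I2, dt * I2])
Sigma_u = np.diag([0.1**2, 0.1**2])      # Sigma_u = diag(sigma_x^2, sigma_y^2)

def propagate(x, rng):
    """One step of x_{t+dt} = F x_t + G u_t, with u_t ~ N(0, Sigma_u)."""
    u = rng.multivariate_normal(np.zeros(2), Sigma_u)
    return F @ x + G @ u

def h_distance(x, sensor, K=40.0, eta=2.0):
    """Relative-distance (received-power) model from the slide."""
    d2 = (x[0] - sensor[0])**2 + (x[1] - sensor[1])**2
    return K - 10.0 * eta * np.log10(d2)

def h_angle(x, sensor):
    """Relative-angle model: arctan((y_t - y_bar)/(x_t - x_bar))."""
    return np.arctan2(x[1] - sensor[1], x[0] - sensor[0])

rng = np.random.default_rng(0)
x = np.array([300.0, 1450.0, 5.0, 2.0])   # hypothetical initial state
x = propagate(x, rng)
print(h_distance(x, sensor=np.array([320.0, 1460.0])))
```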

Cluttered noise model

Measurements consist of the correct measurement, if the target is detected, plus incorrect measurements arising from clutter.

$m_f$: number of false alarms, with

$$P(m_f = k) = e^{-\lambda V}\, \frac{(\lambda V)^k}{k!}, \qquad k = 0, 1, 2, \ldots$$

$V$: volume of the observation area; $\lambda$: number of false alarms per unit volume.
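A quick sketch of how clutter could be simulated under this model; the density $\lambda$, the area, and the uniform clutter placement are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2e-6                 # false alarms per unit volume (assumption)
V = 700.0 * 500.0          # volume (area) of the observation region (assumption)

m_f = rng.poisson(lam * V)   # P(m_f = k) = e^{-lam V} (lam V)^k / k!
clutter = rng.uniform(low=(0.0, 0.0), high=(700.0, 500.0), size=(m_f, 2))
print(m_f, clutter.shape)
```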

Problem statement

$m_t$: number of measurements (target-originated or false alarms) at time $t$
$z_t = \{z_t^m\}_{m=1}^{m_t}$: set of measurements at time $t$
$Z_t = \{z_j\}_{j=1}^{t}$: set of measurements up to time $t$

Define the association events

$$\theta_t^m = \{z_t^m \text{ is the target-originated measurement}\}, \quad m = 1, \ldots, m_t,$$
$$\theta_t^0 = \{\text{none of the measurements at time } t \text{ is target-originated}\},$$

with

$$P(\theta_t^m \mid m_t) = \begin{cases} \dfrac{P_D P_G}{m_t}, & m = 1, \ldots, m_t, \\[2mm] 1 - P_D P_G, & m = 0. \end{cases}$$

Goal: perform online estimation of $p(x_t \mid Z_t)$ based on $Z_t$ at densely deployed sensor nodes.

Sensor Selection

Suppose at time $t$ sensor $l$ is the leader node and holds the belief $p(x_t \mid Z_t)$. For the new leader node at time $t+1$, we select the sensor $s$ that achieves the maximum information gain on $x_{t+1}$ conveyed by the new measurement $z_{t+1,s}$:

$$I(x_{t+1};\, z_{t+1,s} \mid Z_t) = H(x_{t+1} \mid Z_t) - H(x_{t+1} \mid Z_t, z_{t+1,s}).$$

In practice, the new observation $z_{t+1,s}$ is not yet available at time $t$.

Sensor Selection (cont.)

We therefore estimate the measurement $z_{t+1,s}$ from the predicted belief at time $t$ and use an expected likelihood function $\hat{p}(\tilde{z}_{t+1,s} \mid x_{t+1})$, then calculate the entropies using

$$\hat{p}(x_{t+1} \mid \tilde{z}_{t+1,s}, Z_t) \propto \hat{p}(\tilde{z}_{t+1,s} \mid x_{t+1})\, p(x_{t+1} \mid Z_t).$$

SMC data fusion

Let sensor $l$ be the leader node at time $t$, holding samples $\{x_{t-1}^{(j)}, w_{t-1}^{(j)}\}_{j=1}^N$ properly weighted with respect to $p(x_{t-1} \mid Z_{t-1})$. Let $x_t$ be the target state and $z_t = \{z_t^m\}_{m=1}^{m_t}$ the set of validated measurements. Denote $X_t = \{X_{t-1}, x_t\}$ and $Z_t = \{Z_{t-1}, z_t\}$.

We are interested in the online estimation of the new belief state $p(x_t \mid Z_t)$. We use an auxiliary particle filter that resamples at the previous time step, based on point estimates $\mu_t^{(j)}$ that characterize $p(x_t \mid x_{t-1}^{(j)})$.

SMC data fusion (cont.)

1. For each $i = 1, \ldots, N$, compute the point estimate $\mu_t^{(i)}$ by an extended Kalman filter or a one-step prediction based on $x_{t-1}^{(i)}$.
2. For $j = 1, \ldots, N$, do the following:
   (a) Sample an auxiliary index from $\{1, \ldots, N\}$ with probabilities proportional to $g_t^{(i)} \propto w_{t-1}^{(i)}\, p(z_t \mid \mu_t^{(i)})$; call the sampled index $i(j)$.
   (b) Sample the current state vector $x_t^{(j)}$ from $p(x_t \mid x_{t-1}^{(i(j))})$.
   (c) Evaluate the corresponding weight $w_t^{(j)} \propto p(z_t \mid x_t^{(j)}) \,/\, p(z_t \mid \mu_t^{(i(j))})$.
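A minimal sketch of one auxiliary-particle-filter step in Python. The callables (`point_estimate`, `sample_transition`, `likelihood`) are placeholders for the model above; their names are assumptions.

```python
import numpy as np

def apf_step(particles, weights, z, point_estimate, sample_transition,
             likelihood, rng=None):
    """One auxiliary particle filter step as in the slide.

    particles : (N, d) array of x_{t-1}^{(j)};  weights : (N,) array of w_{t-1}^{(j)}
    point_estimate(x)    -> mu_t for ancestor x (EKF mean or one-step prediction)
    sample_transition(x) -> a draw from p(x_t | x_{t-1} = x)
    likelihood(z, x)     -> p(z_t | x_t = x)
    """
    rng = rng or np.random.default_rng()
    N = len(weights)
    # Step 1: point estimates mu_t^{(i)}.
    mu = np.array([point_estimate(x) for x in particles])
    # Step 2a: first-stage weights g^{(i)} proportional to w_{t-1}^{(i)} p(z|mu^{(i)}).
    g = weights * np.array([likelihood(z, m) for m in mu])
    g = g / g.sum()
    idx = rng.choice(N, size=N, p=g)                 # auxiliary indices i(j)
    # Step 2b: propagate the selected ancestors.
    new_particles = np.array([sample_transition(particles[i]) for i in idx])
    # Step 2c: second-stage weights w^{(j)} = p(z|x^{(j)}) / p(z|mu^{(i(j))}).
    w = np.array([likelihood(z, x) for x in new_particles])
    w = w / np.array([likelihood(z, mu[i]) for i in idx])
    return new_particles, w / w.sum()
```

Biasing the first-stage resampling toward ancestors whose point prediction explains $z_t$ well is what makes the filter robust when the measurement is informative relative to the prior.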

Entropy calculation

We need the entropy of the expected posterior distribution based on the discrete belief state $\{x_t^{(j)}, w_t^{(j)}\}_{j=1}^N$.

Using the trial distribution $q(x_{t+1}) = p(x_{t+1} \mid x_t^{(j)}, Z_t)$, draw samples $\{x_{t+1}^{(j)}, w_{t+1}^{(j)}\}_{j=1}^N$ with $w_{t+1}^{(j)} = w_t^{(j)}$. Then $\{x_{t+1}^{(j)}, w_{t+1}^{(j)}\}_{j=1}^N$ is properly weighted with respect to $p(x_{t+1} \mid Z_t)$.

Entropy calculation (cont.)

The expected posterior belief for sensor $s$ can be represented by the discrete belief state $\{x_{t+1}^{(j)}, \tilde{w}_{t+1,s}^{(j)}\}_{j=1}^N$ with

$$\tilde{w}_{t+1,s}^{(j)} \propto \hat{p}(\tilde{z}_{t+1,s} \mid x_{t+1}^{(j)})\, w_{t+1}^{(j)}, \qquad \hat{p}(\tilde{z}_{t+1,s} \mid x_{t+1}^{(j)}) = \sum_{k=1}^{N} p\!\left(z_{t+1,s}(x_{t+1}^{(k)}) \mid x_{t+1}^{(j)}\right) w_{t+1}^{(k)}.$$

Entropy calculation:

$$H(x_{t+1} \mid Z_t, z_{t+1,s}) = -\sum_{j=1}^{N} \tilde{w}_{t+1,s}^{(j)} \log \tilde{w}_{t+1,s}^{(j)}.$$
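In code, the expected-likelihood weights and the entropy for a candidate sensor $s$ might look as follows. Function and argument names are assumptions; `h_s` and `lik_s` stand for sensor $s$'s measurement function and likelihood.

```python
import numpy as np

def expected_posterior_entropy(X_pred, w_pred, h_s, lik_s):
    """H(x_{t+1} | Z_t, z_{t+1,s}) from the predicted belief {x^{(j)}, w^{(j)}}.

    X_pred : (N, d) predicted particles for p(x_{t+1} | Z_t)
    w_pred : (N,) normalized weights
    h_s(x)      : predicted measurement z_{t+1,s}(x) of sensor s at state x
    lik_s(z, x) : measurement likelihood p(z | x) for sensor s
    """
    z_pred = [h_s(x) for x in X_pred]               # z_{t+1,s}(x^{(k)})
    # Expected likelihood: sum_k p(z(x^{(k)}) | x^{(j)}) w^{(k)}.
    phat = np.array([sum(wk * lik_s(zk, xj) for zk, wk in zip(z_pred, w_pred))
                     for xj in X_pred])
    w_s = phat * w_pred                              # unnormalized w~_{t+1,s}^{(j)}
    w_s = w_s / w_s.sum()
    return -np.sum(w_s * np.log(w_s + 1e-300))      # guard against log(0)
```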

Kernel Smoothing

Kernel density estimate based on samples:

$$\hat{p}(x_t \mid Z_t) = \sum_{j=1}^{N} w_t^{(j)}\, \mathcal{N}\!\left(x_t;\, m_t^{(j)},\, h^2 V_t\right),$$

with $m_t^{(j)} = a\, x_t^{(j)} + (1-a)\, \bar{x}_t$ and $a = \sqrt{1 - h^2}$.

- Instead of resampling from the discrete samples, one can resample from this continuous approximation, which mitigates the degeneracy problem.
- Mixture components can be collapsed by replacing nearest pairs with a single averaged component, reducing the communication overhead between sensors.
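A possible implementation of the shrunk kernel representation and the collapsing step; the greedy nearest-pair merge below is one plausible reading of "replacing nearest pairs with a single averaged one", not necessarily the exact rule used in the talk.

```python
import numpy as np

def kernel_representation(X, w, h=0.1):
    """Kernel locations m^{(j)} = a x^{(j)} + (1-a) x_bar with a = sqrt(1-h^2)."""
    a = np.sqrt(1.0 - h**2)
    x_bar = np.average(X, axis=0, weights=w)
    M = a * X + (1.0 - a) * x_bar
    V = np.cov(X.T, aweights=w)          # overall weighted covariance V_t
    return M, np.asarray(w, float), h, V

def collapse(M, w, K):
    """Greedily merge the nearest pair of kernels until K components remain."""
    M, w = [m for m in M], [float(x) for x in w]
    while len(M) > K:
        best, pair = np.inf, (0, 1)
        for i in range(len(M)):
            for j in range(i + 1, len(M)):
                d = float(np.sum((M[i] - M[j])**2))
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        w_ij = w[i] + w[j]
        M[i] = (w[i] * M[i] + w[j] * M[j]) / w_ij    # weight-averaged center
        w[i] = w_ij
        del M[j], w[j]
    return np.array(M), np.array(w)
```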

Summary

The initial leader node $a$ does the following:

1. Draw initial samples $\{x_0^{(j)}, w_0^{(j)} = 1\}_{j=1}^N$ from the prior information of the target;
2. Update the belief state $\{x_1^{(j)}, w_1^{(j)}\}_{j=1}^N$ based on the new measurement $z_1$ at leader node $a$;
3. Compute the expected posterior belief state $\{x_2^{(j)}, \tilde{w}_{2,i}^{(j)}\}_{j=1}^N$ for each neighbor node $i$;
4. Compute the entropy of the expected posterior belief state $\{x_2^{(j)}, \tilde{w}_{2,i}^{(j)}\}_{j=1}^N$ for each neighbor node $i$, and determine the next best sensor, say $b$;

Summary (cont.)

5. Compute the kernel representation of the belief state $\left\{\{m_1^{(j)}, w_1^{(j)}\}_{j=1}^N, h, V_1\right\}$ and collapse the kernel-smoothed belief state;
6. Hand off the collapsed kernel representation $\left\{\{\tilde{m}_1^{(j)}, \tilde{w}_1^{(j)}\}_{j=1}^K, h, V_1\right\}$ to $b$.

Node $b$ then takes the role of the leader and does the following:

1. Resample the belief state $\{x_1^{(j)}, w_1^{(j)} = 1\}_{j=1}^N$ from the collapsed kernel representation $\left\{\{\tilde{m}_1^{(j)}, \tilde{w}_1^{(j)}\}_{j=1}^K, h, V_1\right\}$ using the kernel-based resampling algorithm;

2. Update the belief state $\{x_2^{(j)}, w_2^{(j)} = 1\}_{j=1}^N$ by the sensor fusion algorithm, based on the new measurement $z_2$ and the belief state $\{x_1^{(j)}, w_1^{(j)} = 1\}_{j=1}^N$;
3. Compute the expected posterior belief state $\{x_3^{(j)}, \tilde{w}_{3,i}^{(j)}\}_{j=1}^N$ for each neighbor node $i$;
4. Compute the entropy of the expected posterior belief state $\{x_3^{(j)}, \tilde{w}_{3,i}^{(j)}\}_{j=1}^N$ for each neighbor node $i$, and determine the next best sensor, say $c$;
5. Compute the kernel representation of the belief state $\left\{\{m_2^{(j)}, w_2^{(j)}\}_{j=1}^N, h, V_2\right\}$ and collapse it;
6. Hand off the collapsed kernel representation $\left\{\{\tilde{m}_2^{(j)}, \tilde{w}_2^{(j)}\}_{j=1}^K, h, V_2\right\}$ to $c$.
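The kernel-based resampling referred to in step 1 above could be implemented as below: pick a kernel by weight, then draw from the corresponding Gaussian component. This is a sketch; the function name is an assumption.

```python
import numpy as np

def kernel_resample(M, w, h, V, N, rng=None):
    """Draw N equally weighted particles from sum_j w^{(j)} N(m^{(j)}, h^2 V)."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(w), size=N, p=w / w.sum())   # kernel index by weight
    noise = rng.multivariate_normal(np.zeros(V.shape[0]), h**2 * V, size=N)
    return M[idx] + noise, np.full(N, 1.0 / N)        # equal weights afterwards
```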

Simulation Results: Localization

[Figure: source localization with sensors s1-s6, showing the source position, the posterior mean estimates, and the bias at sensor s6; X (m) vs. Y (m).]

Simulation Results: Tracking

[Figure: actual trajectory vs. trajectories estimated by OPT, PNN, and NN, with sensor positions; X (m) vs. Y (m).]

Multiple Target Tracking and Classification

Multi-Target Tracking/Classification

- Densely scattered sensor nodes are able to communicate with their neighbors and to process information.
- Multiple target tracking: sequential estimation of the state of a possibly varying number of objects.
- Classification: identification of those objects down to a given class of motion model.
- False detections: measurements arising from the clutter.
- Unknown origin of the measurements: a complex data association problem.

Multi-Target Tracking/Classification (cont.)

- Sensor collaboration: easier, localized tasks
- Single-target algorithm with (dis)appearing targets

[Figure: sensors, actual tracks 1 and 2, estimated tracks, and the sensors selected for each target; X (m) vs. Y (m).]

Leader-based Tracking

Operations performed by a leader node [?] in multiple-target tracking and classification:

[Diagram of leader-node operations, not transcribed.]

Target Dynamics

The dynamics of the system decompose as

$$x_{t,i} = F_{\gamma_{t,i}}(x_{t-1,i}, u_{t,i}), \qquad i \in T_t,$$
$$\gamma_{t,i} = \gamma_{t-1,i}, \qquad i \in T_t,$$

where
- $T_t$: set of active targets
- $x_{t,i}$: position and velocity of the $i$-th target
- $\gamma_{t,i}$: class of the $i$-th target
- $u_{t,i}$: independent white noise terms

Sensing Model

The general model for the measurements is

$$z_t^m = H_t(x_{t, a_t(m)}, v_{t,m}), \qquad m \in \{m' : a_t(m') \neq 0\},$$
$$z_t^m \sim p_c(z), \qquad m \in \{m' : a_t(m') = 0\},$$

where
- $z_t^m$: $m$-th measurement
- $a_t$: data association vector
- $H_t$: measurement function (e.g., power measurement)
- $p_c$: clutter measurement distribution (e.g., uniform over the power range)

The number of clutter measurements, $m_t^0$, typically arises from a Poisson distribution.

Measurement Likelihood

Computation of the conditional distribution of the measurements requires enumeration of all possible data associations:

$$p(z_t \mid X_t, r_t) = \sum_{a_t} p(z_t \mid X_t, r_t, a_t)\, p(a_t \mid r_t, m_t).$$

Given the system dynamics and the conditional distribution of the measurements, we can now apply the SMC methodology to solve for the posterior distribution.
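A brute-force sketch of this enumeration for small numbers of measurements and targets; real implementations gate and prune the association hypotheses, and the names below are assumptions.

```python
import itertools

def likelihood_by_association(z, targets, lik, p_clutter, p_assoc):
    """p(z_t | X_t, r_t) = sum over a_t of p(z_t | X_t, r_t, a_t) p(a_t | r_t, m_t).

    z           : list of measurements z_t^m
    targets     : list of active target states x_{t,i}
    lik(z, x)   : p(z | x) for a target-originated measurement
    p_clutter(z): clutter density p_c(z)
    p_assoc(a)  : prior p(a_t | r_t, m_t) of association vector a
    """
    m, n = len(z), len(targets)
    total = 0.0
    # a[k] in {0, ..., n}: 0 means clutter, i > 0 means target i.
    for a in itertools.product(range(n + 1), repeat=m):
        hit = [i for i in a if i > 0]
        if len(hit) != len(set(hit)):     # each target explains <= 1 measurement
            continue
        p = p_assoc(a)
        for zm, am in zip(z, a):
            p *= lik(zm, targets[am - 1]) if am > 0 else p_clutter(zm)
        total += p
    return total
```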

Single Target Tracking/Classification

We compare several classes of dynamic models.

- 1st approach: include the class in the state vector.
- This is problematic because of the fixed model for the class: all particles eventually settle in one class.
- One can instead use a separate filter for each class.
- We propose to modify the resampling scheme.

Class-based Resampling

The procedure (a sketch in code follows this list):

- Draw the number of particles for each class, $N_\gamma$, according to $\{\hat{P}(\gamma \mid Z_{0:t})\}_{\gamma \in \Lambda}$.
- If $N_\gamma < N_{\text{Threshold}}$, set $N_\gamma = N_{\text{Threshold}}$.
- Reduce the number of particles from the class with the most particles until $\sum_{\gamma \in \Lambda} N_\gamma = N$.
- Draw $N_\gamma$ sample streams $\{X_{0:t}^{(J)}\}$ from $\{X_{0:t}^{(j)}\}_{j : \gamma^{(j)} = \gamma}$ with probability proportional to the weights $\{w_t^{(j)}\}_{j : \gamma^{(j)} = \gamma}$.
- Assign equal weight to each new sample within a class, i.e. $w_t^{(J)} = \hat{P}(\gamma^{(J)} \mid Z_{0:t}) \,/\, N_{\gamma^{(J)}}$.
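A sketch of the class-based resampling step in Python; the function name and default threshold are assumptions.

```python
import numpy as np

def class_based_resample(X, w, classes, N_threshold=20, rng=None):
    """Class-based resampling: each class keeps at least N_threshold particles."""
    rng = rng or np.random.default_rng()
    N = len(w)
    labels = np.unique(classes)
    # Posterior class probabilities P_hat(gamma | Z_0:t).
    P = {g: float(w[classes == g].sum()) for g in labels}
    # Per-class particle counts, floored at N_threshold.
    Ng = {g: max(int(round(P[g] * N)), N_threshold) for g in labels}
    # Adjust until the counts sum to N; per the slide, excess particles are
    # removed from the class with the most particles.
    while sum(Ng.values()) != N:
        g_adj = max(Ng, key=Ng.get)
        Ng[g_adj] += 1 if sum(Ng.values()) < N else -1
    X_new, w_new, c_new = [], [], []
    for g in labels:
        members = np.flatnonzero(classes == g)
        pg = w[members] / w[members].sum()
        idx = rng.choice(members, size=Ng[g], p=pg)   # resample within class
        X_new.append(X[idx])
        # Equal weight within a class: w = P_hat(gamma) / N_gamma.
        w_new.append(np.full(Ng[g], P[g] / Ng[g]))
        c_new.append(np.full(Ng[g], g))
    return np.concatenate(X_new), np.concatenate(w_new), np.concatenate(c_new)
```

The floor $N_{\text{Threshold}}$ is what keeps minority-class hypotheses alive early on, before the data have clearly discriminated between classes.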

Jump Markov Systems

A varying number of targets can be dealt with through:
- hypothesis testing, or
- assuming a dynamic on the number of targets $r_t$.

We consider a Markovian evolution of $r_t$:

$$\pi_{r_t, r_{t-1}} = p(r_t \mid r_{t-1}).$$

This model is referred to as a jump Markov system (JMS).

Choice of the Sampling Density

- The optimal density is intractable and its approximations are computationally heavy.
- The main problem resides in the density for the number of targets, so we approximate only the optimal density for the number of targets:

$$p(r_t \mid X_{t-1}^{(j)}, r_{t-1}^{(j)}, Z_t) \propto p(z_t \mid X_{t-1}^{(j)}, r_{t-1}^{(j)}, r_t)\, p(r_t \mid r_{t-1}^{(j)}),$$
$$q(r_t \mid X_{t-1}^{(j)}, r_{t-1}^{(j)}, Z_t) \propto p(z_t \mid \mu_t^{(j)}(r_t))\, p(r_t \mid r_{t-1}^{(j)}),$$

with $\mu_{t,i}^{(j)} = E[x_{t,i} \mid x_{t-1,i}^{(j)}, \gamma_i]$.
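A small sketch of sampling the number of targets from this approximate density; the names are assumptions, with `lik_at_mu(r)` evaluating $p(z_t \mid \mu_t(r))$ and `trans_prob` the Markov transition probability.

```python
import numpy as np

def sample_num_targets(r_prev, candidates, lik_at_mu, trans_prob, rng=None):
    """Sample r_t with q(r_t) proportional to p(z_t | mu_t(r_t)) p(r_t | r_{t-1})."""
    rng = rng or np.random.default_rng()
    q = np.array([lik_at_mu(r) * trans_prob(r_prev, r) for r in candidates])
    q = q / q.sum()
    return int(rng.choice(candidates, p=q)), q
```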

Sensor Selection Scheme

Information-driven sensor selection: with $s_t$ the sensor chosen at time $t$,

$$s_{t+1} = \arg\max_s\, E_{p_s(z_{t+1} \mid Z_{1:t})}\!\left[\alpha\, \Upsilon_{\text{utility}}(s) + (1 - \alpha)\, \Upsilon_{\text{cost}}(s)\right],$$

$$E[\Upsilon_{\text{utility}}(s)] = I_s(x_{t+1,1};\, z_{t+1} \mid Z_{1:t}).$$

The expectation is approximated by Monte Carlo integration from the available samples.
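In code, the selection rule reduces to an argmax over candidate sensors; here `info_gain(s)` would be a Monte Carlo estimate of $I_s$ built from `expected_posterior_entropy` above, and the cost term and default $\alpha$ are assumptions.

```python
def select_sensor(candidates, info_gain, cost, alpha=0.7):
    """s_{t+1} = argmax_s [alpha * utility(s) + (1 - alpha) * cost(s)].

    info_gain(s): estimated I_s(x_{t+1}; z_{t+1} | Z_{1:t}), e.g. the prior
                  entropy minus expected_posterior_entropy(...) for sensor s.
    cost(s):      signed cost/reward term for querying sensor s.
    """
    return max(candidates,
               key=lambda s: alpha * info_gain(s) + (1 - alpha) * cost(s))
```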

Simulation Scenario

- We consider the tracking of two crossing targets from two different classes: the first is initially tracked; the second is initially unknown.
- Probability of detection: 0.95
- Average clutter measurements per time step: 1
- Field: 700 × 500 m² covered by 300 randomly scattered sensor nodes
- Sensing range: [4 m, 50 m]

Tracking Results

Actual and estimated trajectories with an unknown target.

[Figure: sensors, actual tracks 1 and 2, estimated tracks, and the sensors selected for each target; X (m) vs. Y (m).]

Target Detection Results

Probabilities of having two targets in the field: (left) first leader node; (right) newly generated second leader node.

[Figure: probability of having two targets vs. time for both leader nodes; the right panel marks the generation of the second leader node.]

Classification Results

Probabilities that the first target of the leader node is from the first class: (left) first leader node; (right) newly generated second leader node.

[Figure: probability of being in class 1 vs. time for both leader nodes; the right panel marks the generation of the second leader node.]

Conclusions

- SMC provides a unique capability for supporting in-network distributed signal processing in nonlinear and/or non-Gaussian environments.
- Developed a leader-based multi-target tracking and classification algorithm based on SMC filtering.
- Developed a class-based resampling procedure to avoid the loss of plausible classification hypotheses during the early stage of tracking.
- Developed an SMC implementation of an optimal sensor selection scheme based on expected information gain.