VoI for Learning and Inference in Sensor Networks


ARO MURI on Value-centered Information Theory for Adaptive Learning, Inference, Tracking, and Exploitation

VoI for Learning and Inference in Sensor Networks

Gene Whipps 1,2, Emre Ertin 2, Randy Moses 2
1 U.S. Army Research Laboratory, Networked Sensing & Fusion Branch
2 The Ohio State University, Dept. of Electrical & Computer Engineering

Two axes of progress

Progress 1: Distributed decision fusion
- Propose a distributed detection framework to analyze large, random sensor networks [Year 2].
- Analyze performance versus network constraints and under imperfect communications.

Progress 2 [new]: Distributed/decentralized learning
- Learn low-dimensional manifold structure in high-dimensional observations using a mixture-of-factor-analyzers framework.
- Apply distributed EM from local sufficient statistics.
- Derive a decentralized EM that approximates distributed learning via consensus averaging in network neighborhoods.

Progress 1: Distributed decision fusion in single-hop, random sensor networks

Performance analysis of a large sensor network: distributed detection of a rare event.
- Each sensor observes a source and computes a local feature vector.
- Local decisions, made against local thresholds τ, are sent to a fusion center, which estimates the state H with a counting rule against a fusion threshold κ.

Goal: Quantify fusion detection performance as a function of sensor network density.
Challenge: Increasing sensor density requires decreasing the per-node data rate to meet network constraints, so local sensor detections are desensitized (τ is increased) as network density increases.

Progress 1: Distributed decision fusion in single-hop, random sensor networks

P_M (= 1 − P_D) vs. sensor density, for fixed H_0 communication traffic. As sensor node density increases:
- Constrain the rate of sensor reporting to maintain the same average network communication load under H_0.
- Quantify performance under H_1.

[Figure: probability of missed detection vs. sensor density at several false-alarm levels (P_FA1, P_FA2, P_FA3), assuming perfect communications to the fusion center.]
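The counting rule with density-dependent thresholds can be sketched numerically. The snippet below is a minimal Monte Carlo illustration, assuming unit-variance Gaussian local statistics (mean 0 under H_0, mean `mu1` under H_1) rather than the poster's actual signal model; `counting_rule`, `tau`, and `kappa` follow the slide's notation, and the (n, τ) schedule is chosen only so that the expected H_0 reporting load n·p_0 stays roughly constant as density grows.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

def counting_rule(n, tau, kappa, mu1=1.0, trials=50_000):
    """Monte Carlo P_FA and P_M for the counting rule.

    Each of n sensors reports when its local statistic exceeds tau; the
    fusion center declares H1 when at least kappa reports arrive.
    Local statistics are modeled as N(0,1) under H0 and N(mu1,1) under H1
    (an illustrative model, not the slide's actual signal model).
    """
    p0, p1 = Q(tau), Q(tau - mu1)            # local report rates per hypothesis
    pfa = np.mean(rng.binomial(n, p0, trials) >= kappa)
    pm = np.mean(rng.binomial(n, p1, trials) < kappa)
    return p0, pfa, pm

# Desensitizing local detectors (raising tau) as density n grows keeps the
# expected H0 reporting load n * p0 roughly constant (about one report):
for n, tau in [(100, 2.33), (400, 2.81), (1600, 3.23)]:
    p0, pfa, pm = counting_rule(n, tau, kappa=6)
    print(f"n={n:5d}  tau={tau:.2f}  E[H0 reports]={n * p0:5.2f}  "
          f"P_FA={pfa:.4f}  P_M={pm:.4f}")
```

Under this toy model the H_0 traffic stays fixed while the miss probability falls with density, which is the tradeoff the performance curves quantify.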

Progress 1: Distributed decision fusion with communications collisions

Imperfect communications: slotted ALOHA. Time alternates between sensing periods and communication periods; communication period k follows sensing period k and is divided into slots 1, …, M.

MAC properties:
- Poisson(λ) nodes share the M slots; fusion delay is proportional to M.
- Only sensors with local detections transmit, so on average λ_h nodes transmit under hypothesis h.
- Each transmitting sensor selects a slot according to a discrete uniform distribution.
- No collision avoidance mechanism is used, and no retransmissions are made if collisions occur.

At the fusion center:
- M_0 slots have no transmissions,
- M_1 slots have single detection transmissions,
- M_C slots have collisions (more than one detection).

For the assumed target parameters, the optimal fusion rule is the weighted count α·M_1 + β·M_C.
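A minimal simulation of this reporting channel under the stated MAC properties (Poisson number of transmitting sensors, uniform slot choice, no collision avoidance, no retransmissions). The weights `alpha` and `beta` are placeholders, since the slide only states that the optimal rule has the form α·M_1 + β·M_C for the assumed target parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def slot_counts(lam_p, M, trials=20_000):
    """Simulate one slotted-ALOHA comms period per trial.

    A Poisson(lam_p) number of sensors have local detections; each picks
    one of M slots uniformly at random, with no collision avoidance and
    no retransmissions. Returns per-trial counts of singleton slots (M1)
    and collided slots (MC); empty slots make up M0 = M - M1 - MC.
    """
    n_tx = rng.poisson(lam_p, trials)
    M1 = np.empty(trials, dtype=int)
    MC = np.empty(trials, dtype=int)
    for t, n in enumerate(n_tx):
        occupancy = np.bincount(rng.integers(0, M, n), minlength=M)
        M1[t] = np.count_nonzero(occupancy == 1)
        MC[t] = np.count_nonzero(occupancy >= 2)
    return M1, MC

def fusion_stat(M1, MC, alpha=1.0, beta=2.0):
    # Collided slots still carry evidence (at least two detections), so they
    # get a larger weight. alpha and beta here are placeholder values; the
    # actual weights depend on the assumed target parameters.
    return alpha * M1 + beta * MC

M = 5
s_h0 = fusion_stat(*slot_counts(0.5, M))   # lambda * p0 = 0.5 under H0
s_h1 = fusion_stat(*slot_counts(7.0, M))   # lambda * p1 = 7 under H1
print(f"mean fusion statistic: H0 = {s_h0.mean():.2f}, H1 = {s_h1.mean():.2f}")
```

The gap between the two mean statistics is what the ROC curves trade against the number of slots M (and hence fusion delay).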

Progress 1: Distributed decision fusion with communications collisions

[Figure: ROC curves (probability of detection vs. probability of false alarm) for M = 5 slots and λp_0 = 0.5; dotted curves show perfect communications, solid curves slotted ALOHA, for λp_1 = 5, 7, 9.]

[Figure: probability of detection vs. number of slots M at P_FA = 10^-5 and λp_0 = 0.5, for λp_1 = 5, 7, 9.]

Progress 2: Distributed learning of a mixture of factor analyzers

Learning low-dimensional manifolds: distributed and decentralized unsupervised learning of a low-dimensional manifold from high-dimensional observations.
- Each sensor observes its environment and computes local sufficient statistics.
- Local statistics are sent to a fusion center (distributed) or to a network neighborhood (decentralized) to estimate the global model.

Goal: Design unsupervised learning methods and study their convergence properties. The data live on a lower-dimensional subspace.

[Figure from: Saul, L. and Roweis, S., "Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds," Technical Report MS-CIS-02-18, University of Pennsylvania.]
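To make "the data live on a lower-dimensional subspace" concrete, the sketch below samples from a single factor analyzer, the locally linear building block of the MFA. The dimensions and noise level are illustrative choices, not the poster's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_factor_analyzer(mu, Lam, psi, n):
    """Draw n samples from one factor analyzer: x = mu + Lam z + eps.

    z ~ N(0, I_d) lives on a d-dimensional latent subspace and
    eps ~ N(0, diag(psi)) is small observation noise, so the observed
    high-dimensional x concentrates near a d-dimensional affine subspace.
    (Illustrative dimensions; the poster's example uses 6-D data whose 1-D
    nonlinear manifold is approximated piecewise by such components.)
    """
    d = Lam.shape[1]
    z = rng.standard_normal((n, d))
    eps = rng.standard_normal((n, len(mu))) * np.sqrt(psi)
    return mu + z @ Lam.T + eps

D, d = 6, 1                        # observed and latent dimensions
mu = np.zeros(D)
Lam = rng.standard_normal((D, d))  # factor loadings: the subspace direction
psi = np.full(D, 0.01)             # small per-coordinate observation noise
X = sample_factor_analyzer(mu, Lam, psi, 500)

# The sample covariance is approximately Lam Lam^T + diag(psi): low-rank
# structure plus noise, so the top eigenvalue dominates the rest.
evals = np.linalg.eigvalsh(np.cov(X.T))[::-1]
print("eigenvalue spectrum:", np.round(evals, 3))
```

The dominant eigenvalue is the low-rank signal subspace that the sensors' local sufficient statistics must capture.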

Progress 2: Distributed learning of MFAs

VoI approach:
- Gaussian mixture model (GMM) representation with a locally linear approximation: the mixture of factor analyzers (MFA) [Roweis et al., 2001; Verbeek, 2006].
- Fusion of GMMs; local sensors see only a subset of the data modes.
- Sensors estimate the MFA means and low-rank covariances; consensus is used to align the local estimates.
- The needed MFA representation is expressed as functions of the means.
- Modes are initialized by distributed or decentralized k-means.

Network information flow:
- Distributed: each node has a communication path to a fusion center.
- Decentralized: nodes are connected, but there is no fusion center.
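A toy sketch of the decentralized recipe: local E-steps, consensus averaging of the sufficient statistics over network neighborhoods, then a shared M-step. A plain 1-D Gaussian mixture is used for brevity (the MFA adds low-rank covariance factors to the same pattern), and the ring network and Metropolis-style weights are illustrative assumptions, not the poster's topology.

```python
import numpy as np

rng = np.random.default_rng(2)

def consensus(values, W, iters=50):
    """Average node-local quantities by repeated neighbor averaging.

    W is a doubly stochastic mixing matrix matching the network graph;
    after enough iterations every node holds approximately the network
    average, with no fusion center involved.
    """
    x = np.array(values, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

def decentralized_em_step(data_per_node, mu, sigma, pi, W):
    """One decentralized EM iteration for a 1-D, K-component GMM.

    Each node runs the E-step on its own data only; the per-node
    sufficient statistics (zeroth/first/second responsibility moments)
    are aligned by consensus averaging before the shared M-step.
    """
    K, N = len(mu), len(data_per_node)
    S0 = np.zeros((N, K)); S1 = np.zeros((N, K)); S2 = np.zeros((N, K))
    for i, x in enumerate(data_per_node):            # local E-steps
        x = np.asarray(x)[:, None]
        logp = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) + np.log(pi)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)            # responsibilities
        S0[i] = r.sum(axis=0)
        S1[i] = (r * x).sum(axis=0)
        S2[i] = (r * x ** 2).sum(axis=0)
    S0, S1, S2 = consensus(S0, W), consensus(S1, W), consensus(S2, W)
    mu_new = S1 / S0                                 # shared M-step
    var_new = np.maximum(S2 / S0 - mu_new ** 2, 1e-6)
    pi_new = S0 / S0.sum(axis=1, keepdims=True)
    return mu_new[0], np.sqrt(var_new[0]), pi_new[0]

# Ring of 6 nodes; each node sees mostly one of the two data modes.
N, K = 6, 2
W = np.zeros((N, N))
for i in range(N):                                   # Metropolis-style weights
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25
data = [rng.normal(-2.0 if i < 3 else 2.0, 0.5, 40) for i in range(N)]
mu, sigma, pi = np.array([-1.0, 1.0]), np.ones(K), np.full(K, 0.5)
for _ in range(30):
    mu, sigma, pi = decentralized_em_step(data, mu, sigma, pi, W)
print("estimated component means:", np.sort(mu))
```

Even though each node observes only one mode, consensus on the sufficient statistics lets every node recover both global component means, which is the behavior the log-likelihood curves on the next slide illustrate for the MFA.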

Progress 2: Distributed learning of a mixture of factor analyzers

Synthetic data example: 21 sensor nodes, 6-D data with a 1-D nonlinear manifold; unconstrained parameter estimates.

[Figure: the manifold depicted in two and in three of the six dimensions, with 1-σ lines centered on the local means.]

[Figure: distributed data log-likelihood vs. iterations on the global estimates for the distributed and decentralized algorithms; each global iterate of the decentralized algorithm includes consensus iterations. Both variants increase the data likelihood.]

Future and ongoing focus areas and collaborations
- Study VoI performance tradeoffs with respect to network density, data complexity (number of mixture components), and network structure.
- Study convergence rates of data representations with respect to network connectivity and data-representation complexity.
- Investigate model-order learning and adaptation in the decentralized setting.
- Investigate extensions to multiple sensor modalities.
- Compare alternative strategies for target manifold learning (Hero + Ertin).