TSRT14: Sensor Fusion Lecture 1


TSRT14: Sensor Fusion, Lecture 1
Course overview. Estimation theory for linear models
Gustaf Hendeby (gustaf.hendeby@liu.se)

Course Overview

TSRT14 Lecture 1, Gustaf Hendeby, Spring 2018

Course Goals (study handbook)

The student should after the course have the ability to describe the most important methods and algorithms for sensor fusion, and be able to apply these to sensor network, navigation, and target tracking applications. More specifically, after the course the student should have the ability to:

- Understand the fundamental principles in estimation and detection theory.
- Implement algorithms for parameter estimation in linear and nonlinear models.
- Implement algorithms for detection and estimation of the position of a target in a sensor network.
- Apply the Kalman filter to linear state space models with a multitude of sensors.
- Apply nonlinear filters (extended Kalman filter, unscented Kalman filter, particle filter) to nonlinear or non-Gaussian state space models.
- Implement basic algorithms for simultaneous localization and mapping (SLAM).
- Describe and model the most common sensors used in sensor fusion applications.
- Implement the most common motion models in target tracking and navigation applications.
- Understand the interplay of the above in a few concrete real applications.

Lecture Plan

Le  Content                                                  Ch
 1  Course overview. Estimation theory for linear models     1, 2
 2  Estimation theory for nonlinear models                   3
 3  Cramér-Rao lower bound. Models for sensor networks       4
 4  Detection theory. Filter theory                          5, 6
 5  Modeling and motion models                               12-14
 6  Kalman filter. Kalman filter approximations (EKF, UKF)   7, 8
 7  Point-mass filter and particle filter                    9
 8  Particle filter theory. Marginalized particle filter     9
 9  Simultaneous localization and mapping (SLAM)             11
10  Sensors and filter validation. Industry case study       14, 15

Course Administration

- Course homepage: http://www.control.isy.liu.se/student/tsrt14/
- Examination:
  - Two labs including one lab report (make sure to sign up!)
  - Exam with a mixture of theoretical and computer exercises
- Contact information:
  - Examiner: Gustaf Hendeby (gustaf.hendeby@liu.se)
  - Teaching assistant: Isaac Skog (isaac.skog@liu.se)
  - Lab session responsible: Parinaz Kasebzadeh (parinaz.kasebzadeh@liu.se)

Course Material

- Literature: Statistical Sensor Fusion, Fredrik Gustafsson, Studentlitteratur, 2018.
- Exercises: Statistical Sensor Fusion: Exercises, Christian Lundquist, Zoran Sjanic, and Fredrik Gustafsson, Studentlitteratur, 2015.
- Laboration 1 and 2 compendium (links on homepage)
- Matlab toolbox Signal and Systems Lab (link on the homepage), with toolbox manual
- Android app Sensor Fusion, available in the Google Play Store, and Matlab and Java real-time interface files

Changes Since 2017

- New edition of the textbook (3rd ed., 2018). The old edition is still ok, but check the errata: http://www.control.isy.liu.se/student/tsrt14/errata.html

Le 1: Linear Estimation Theory and Sensor Networks

- Whiteboard: basic definitions and derivations.
- Slides: summaries, examples, code examples, algorithms.
- Goal: localization of a device, a user, or an object in a sensor network!

A Simple Sensor Network Example

Triangulation, as used by seamen for a long time. Assume two sensors that each measure the bearing to the target accurately and the range less accurately (or vice versa). How should these measurements be fused?

(Figure: confidence ellipses of the two sensor measurements around the sensors S1 and S2 in the (x1, x2) plane.)

- Use all four measurements with the same weight (LS). Poor range information may destroy the estimate.
- Discard the uncertain range information; this gives a triangulation approach with an algebraic solution.
- Weigh the information according to its precision (WLS).

A Standard Example

Laboration 1, but similar principles are used in radio and underwater applications. A vehicle sends out regular acoustic pings; time-synchronized microphones detect the ping arrival times.

- Localization: can the vehicle be located if the microphone positions are known? If the ping times are known? If the ping times are unknown?
- Mapping: can the microphone positions be estimated if the vehicle position is unknown?
- Simultaneous localization and mapping (SLAM): can both the target and the microphone positions be estimated?

Another Standard Example

WiFi base stations in an indoor environment. The receiver gets BSSID (identity) and RSS (received signal strength).

- Can the receiver be positioned if the WiFi base station positions are known?
- Can the receiver be positioned if the base station positions are unknown, but there is a map of RSS as a function of position (fingerprinting)?

A Hard Example

A speaker sends out a 10 kHz (120 dB!) tone. Several vehicles pass by. A microphone network computes Doppler frequencies.

- Can the targets' speeds be estimated?
- Can the targets' positions be estimated?
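For intuition on what the microphone network in this example measures, here is a sketch of the standard acoustic Doppler relation (textbook physics, not code from the course; all numbers are illustrative):

```python
# Standard acoustic Doppler shift for a moving source and a stationary
# microphone: f = f0 * c / (c - v), with v the radial velocity toward
# the microphone (positive = approaching).
C_SOUND = 343.0  # approximate speed of sound in air at 20 C, m/s

def heard_frequency(f0, v_radial, c=C_SOUND):
    """Frequency heard at the microphone for a tone emitted at f0."""
    return f0 * c / (c - v_radial)

def radial_speed(f0, f_heard, c=C_SOUND):
    """Invert the Doppler relation to recover the radial velocity."""
    return c * (1.0 - f0 / f_heard)

f_approach = heard_frequency(10e3, 20.0)   # vehicle approaching at 20 m/s
f_recede = heard_frequency(10e3, -20.0)    # vehicle receding at 20 m/s
v_est = radial_speed(10e3, f_approach)     # recovers 20 m/s exactly
```

The inversion shows why speed is observable from frequency alone, while position requires combining Doppler measurements from several microphones over time.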

Estimation Theory for Linear Models

Chapter 2 Overview

- Linear model y_k = H_k x + e_k
- WLS theory
- WLS off-line versus on-line forms
- The fusion formula (and safe fusion)
- Overview of algorithms with examples
- Tricks: transform to a linear model, and conditionally linear models

WLS Summary

Linear model:

  y_k = H_k x + e_k,  cov(e_k) = R_k,  k = 1, ..., N,

or in stacked form y = Hx + e, cov(e) = R.

WLS loss function:

  V_WLS(x) = Σ_{k=1}^N (y_k − H_k x)^T R_k^{−1} (y_k − H_k x) = (y − Hx)^T R^{−1} (y − Hx).

Solution in batch form:

  x̂ = ( Σ_{k=1}^N H_k^T R_k^{−1} H_k )^{−1} Σ_{k=1}^N H_k^T R_k^{−1} y_k = (H^T R^{−1} H)^{−1} H^T R^{−1} y,

  P = ( Σ_{k=1}^N H_k^T R_k^{−1} H_k )^{−1} = (H^T R^{−1} H)^{−1}.
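As a complementary sketch outside the course's Matlab toolbox, the batch WLS solution can be written in a few lines of NumPy (the sensor setup and noise levels below are made-up illustration values, not from the slides):

```python
import numpy as np

def wls(H, y, R):
    """Batch WLS: xhat = (H'R^-1 H)^-1 H'R^-1 y, P = (H'R^-1 H)^-1."""
    Rinv = np.linalg.inv(R)
    P = np.linalg.inv(H.T @ Rinv @ H)
    return P @ (H.T @ Rinv @ y), P

# Two sensors observing the same x = [1, 1] directly; sensor 1 is ten
# times more accurate than sensor 2 (illustrative numbers).
H = np.vstack([np.eye(2), np.eye(2)])
R = np.diag([0.1, 0.1, 1.0, 1.0])
rng = np.random.default_rng(1)
y = H @ np.array([1.0, 1.0]) + rng.multivariate_normal(np.zeros(4), R)
xhat, P = wls(H, y, R)
# P = ((10 + 1) * I)^-1 = I / 11: the accurate sensor dominates.
```

Note how the covariance P depends only on H and R, not on the data y, exactly as in the batch formula above.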

Sequential WLS

The WLS estimate can be computed recursively in the space/time sequence y_k. Suppose the estimate x̂_{k−1} with covariance P_{k−1} is based on the observations y_{1:k−1}, where we initialize with x̂_0 and P_0 (a prior). A new observation is fused using

  x̂_k = x̂_{k−1} + P_{k−1} H_k^T (H_k P_{k−1} H_k^T + R_k)^{−1} (y_k − H_k x̂_{k−1}),

  P_k = P_{k−1} − P_{k−1} H_k^T (H_k P_{k−1} H_k^T + R_k)^{−1} H_k P_{k−1}.

Note that the fusion formula can be used alternatively; in fact, the derivation is based on the information fusion formula together with the matrix inversion lemma.
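The sequential form can be checked against the batch form numerically. A minimal NumPy sketch (with an assumed vague prior P_0 = 10^6 I standing in for a non-informative one, and made-up observation values):

```python
import numpy as np

def wls_update(xhat, P, H, y, R):
    """One sequential WLS step: fuse observation y = Hx + e, cov(e) = R."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain
    return xhat + K @ (y - H @ xhat), P - K @ H @ P

# Three scalar observations of x = [x1, x2] (illustrative values).
Hs = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]), np.array([[1.0, 1.0]])]
ys = [np.array([1.0]), np.array([2.0]), np.array([3.2])]
R = np.array([[0.1]])

xhat, P = np.zeros(2), 1e6 * np.eye(2)   # vague prior (xhat0, P0)
for H, y in zip(Hs, ys):
    xhat, P = wls_update(xhat, P, H, y, R)
# xhat now (nearly) equals the batch WLS solution [3.2/3, 6.2/3].
```

With equal R per observation, the batch solution reduces to solving (H^T H) x̂ = H^T y; the recursion recovers it up to the (negligible) influence of the vague prior.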

The Fusion Formula

The fusion formula for two independent unbiased estimates, E(x̂_1) = E(x̂_2) = x with cov(x̂_1) = P_1 and cov(x̂_2) = P_2, is

  x̂ = P (P_1^{−1} x̂_1 + P_2^{−1} x̂_2),

  P = (P_1^{−1} + P_2^{−1})^{−1}.

If the estimates are not independent, safe fusion (the covariance intersection algorithm) provides a pessimistic estimate accounting for worst-case correlation.
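A small NumPy sketch of the fusion formula (the two estimates and their covariances are made-up values for illustration):

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Fuse two independent estimates: information-weighted mean."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    return P @ (I1 @ x1 + I2 @ x2), P

# Each estimate is accurate in one component and poor in the other.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([0.0, 1.0]), np.diag([4.0, 1.0])
xhat, P = fuse(x1, P1, x2, P2)
# The fused covariance P = 0.8*I is smaller than both P1 and P2.
```

The fused estimate leans toward x̂_1 in the first component and x̂_2 in the second, since each is weighted by its inverse covariance.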

A Simple Sensor Network Example, cont'd (1/2)

Code for triangulation in Signal and Systems Lab:

p1 = [0; 0]; p2 = [2; 0]; x0 = [1; 1];
X1 = ndist(x0, 0.1*[1, -0.8; -0.8, 1]);
X2 = ndist(x0, 0.1*[1, 0.8; 0.8, 1]);
x1 = rand(X1, 20);
x2 = rand(X2, 20);
plot2(X1, X2, 'conf', 90, 'legend', 'off', 'markersize', 18);
hold on;
plot2(empdist(x1), empdist(x2), 'legend', 'off', 'linewidth', 2);
axis([-0.5, 2.5, -0.5, 1.5]);
plot(p1(1), p1(2), '.b', p2(1), p2(2), '.b', 'markersize', 15);
text(p1(1), p1(2), 'S1')
text(p2(1), p2(2), 'S2')

(Figure: confidence ellipses and samples for X1 and X2 around the sensors S1 and S2.)

A Simple Sensor Network Example, cont'd (2/2)

X3 = fusion(X1, X2);  % WLS
X4 = 0.5*X1 + 0.5*X2; % LS
plot2(X4, X3, 'legend', 'off')
hold on;
axis([-0.5, 2.5, -0.5, 1.5]);
plot(p1(1), p1(2), '.b', p2(1), p2(2), '.b', 'markersize', 15);
text(p1(1), p1(2), 'S1')
text(p2(1), p2(2), 'S2')

(Figure: the LS and WLS fused estimates plotted between the sensors S1 and S2.)

Safe Fusion

Given two unbiased estimates x̂_1, x̂_2 with covariance matrices P_1 and P_2, respectively.

Algorithm:
1. SVD: P_1 = U_1 D_1 U_1^T.
2. SVD: D_1^{−1/2} U_1^T P_2 U_1 D_1^{−1/2} = U_2 D_2 U_2^T.
3. Transformation matrix: T = U_2^T D_1^{−1/2} U_1^T.
4. State transformation: x̄_1 = T x̂_1 and x̄_2 = T x̂_2. The covariances of these are cov(x̄_1) = I and cov(x̄_2) = D_2.
5. For each component i = 1, 2, ..., n_x, let
     x̄^i = x̄_1^i, D^{ii} = 1,        if D_2^{ii} ≥ 1,
     x̄^i = x̄_2^i, D^{ii} = D_2^{ii}, if D_2^{ii} < 1.
6. Inverse state transformation: x̂ = T^{−1} x̄, P = T^{−1} D T^{−T}.
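The six steps above can be sketched in NumPy as follows (my own illustrative rendering; the toolbox function safefusion is the reference implementation, and the test values are made up):

```python
import numpy as np

def safe_fusion(x1, P1, x2, P2):
    """Safe fusion: pick the better estimate per component in coordinates
    where cov(x1) = I and cov(x2) = D2 are both diagonal."""
    U1, d1, _ = np.linalg.svd(P1)                 # step 1: P1 = U1 D1 U1'
    T0 = np.diag(d1 ** -0.5) @ U1.T               # D1^(-1/2) U1'
    U2, d2, _ = np.linalg.svd(T0 @ P2 @ T0.T)     # step 2: = U2 D2 U2'
    T = U2.T @ T0                                 # step 3: T P1 T' = I, T P2 T' = D2
    z1, z2 = T @ x1, T @ x2                       # step 4
    use2 = d2 < 1.0                               # step 5: smaller variance wins
    z = np.where(use2, z2, z1)
    D = np.where(use2, d2, 1.0)
    Ti = np.linalg.inv(T)
    return Ti @ z, Ti @ np.diag(D) @ Ti.T         # step 6

# Each estimate is good in one component; safe fusion keeps the good
# parts without assuming independence.
x1, P1 = np.array([0.0, 0.0]), np.diag([1.0, 9.0])
x2, P2 = np.array([1.0, 1.0]), np.diag([9.0, 1.0])
xhat, P = safe_fusion(x1, P1, x2, P2)
```

In this example the result takes the first component from x̂_1 and the second from x̂_2, with unit variance in each, rather than combining them as the fusion formula would.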

Algorithm Illustration

(Figure: confidence ellipses for x̂_1 and x̂_2 and the resulting safe fusion ellipse.)

A Simple Sensor Network Example, cont'd

X3 = fusion(X1, X2);     % WLS
X4 = fusion(X1, X3);     % X1 used twice
X5 = safefusion(X1, X3);
plot2(X3, X4, X5);
hold on;
axis([-0.5, 2.5, -0.5, 1.5]);
plot(p1(1), p1(2), '.b', p2(1), p2(2), '.b', 'markersize', 15);
text(p1(1), p1(2), 'S1')
text(p2(1), p2(2), 'S2')

(Figure legend: X3 = N([1;1], [0.018, 0; 0, 0.018]); X4 = N([1;1], [0.0129, -0.00344; -0.00344, 0.0129]); X5 = N([1;1], [0.018, -5.88e-18; 5.88e-18, 0.018]). Reusing X1 in the fusion formula (X4) gives an overconfident covariance, while safe fusion (X5) does not.)

Summary

Summary Lecture 1

- Linear model in batch form: y = Hx + e, cov(e) = R.
- WLS minimizes the loss function

    V_WLS(x) = (y − Hx)^T R^{−1} (y − Hx).

- WLS solution:

    x̂ = (H^T R^{−1} H)^{−1} H^T R^{−1} y,  P = (H^T R^{−1} H)^{−1}.

- LS is the special case with R = I, and gives a larger P.
- The fusion formula for two independent estimates, E(x̂_1) = E(x̂_2) = x, cov(x̂_1) = P_1, cov(x̂_2) = P_2:

    x̂ = P (P_1^{−1} x̂_1 + P_2^{−1} x̂_2),  P = (P_1^{−1} + P_2^{−1})^{−1}.

- If the estimates are not independent, P is larger than indicated.

Gustaf Hendeby hendeby@isy.liu.se www.liu.se