Distributed Data Fusion with Kalman Filters. Simon Julier Computer Science Department University College London
2 Structure of Talk Motivation Kalman Filters Double Counting Optimal Distributed Data Fusion Suboptimal Distributed Data Fusion Probabilistic Interpretation of WGMs Summary 2
3 Motivation Motivation Kalman Filters Double Counting Optimal Distributed Data Fusion Suboptimal Distributed Data Fusion Probabilistic Interpretation of WGMs Summary 3
4 Wilderness Search and Rescue 4
5 Wilderness Search and Rescue The priority is search, which must be fast and safe: You have to find somebody before you can rescue them Time is often of the essence Safety of searchers must be ensured UAVs are an ideal tool to use: Rapidly collect data from a wide area Can go where it's dangerous Automated platforms can operate even faster 5
6 UAV Search Task Fly around the environment Control based on prior distribution of target location Detect potentially interesting objects Localise potentially interesting objects Identify potentially interesting objects 6
7 Autonomous Platforms 7
8 Raw Sensor Data Returned 8
9 Target Localisation Estimate the state of a target in an area of interest using multiple UAVs 9
10 Tracking with Multiple UAVs 10
11 Goals of Lecture What causes the algorithms to fail so badly? How can we solve these problems optimally, and what constraints does this place on the solution? How can we solve the problem suboptimally, and what constraints does this place on the problem? 11
12 Kalman Filters Motivation Kalman Filters Double Counting Optimal Distributed Data Fusion Suboptimal Distributed Data Fusion Probabilistic Interpretation of WGMs Summary 12
13 System Description Let the state of the system at time step i be given by the state space vector x_i. Both the process and observation models are linear and are of the form: x_i = F_{i-1} x_{i-1} + v_{i-1} and z_i = H_i x_i + w_i, where v and w are zero-mean process and observation noises 13
14 Structure of the State Estimate The estimate consists of the tuple (x̂_{i|i}, P_{i|i}), where: x̂_{i|i} is the mean vector; really a single numerical estimate, a bit like MAP P_{i|i} is the covariance; really a measure of mean squared error 14
15 Valid Estimates We need a criterion which says whether our system works or not Since we only have a state vector and a covariance matrix, our criterion of success is that our estimate is covariance consistent 15
16 Covariance Consistency We say that an estimate is conservative if it overestimates the actual mean squared error in the estimate: P_{i|i} - E[x̃ x̃^T] ≥ 0 (positive semidefinite), where x̃ = x_i - x̂_{i|i} 16
17 Matched 17
18 Conservative 18
19 Consistent 19
20 Inconsistent 20
21 Biasedness Authors often stipulate that the error needs to be zero-mean However, this is overly restrictive: Any system with modelling errors cannot be consistent under this definition Since all models of the real world are wrong, no filter could ever be consistent 21
22 Biasedness Suppose the estimate has a bias b, i.e. E[x_i - x̂_{i|i}] = b. Then the mean squared error decomposes as E[x̃ x̃^T] = Σ + b b^T, where Σ is the covariance of the error about its mean 22
23 Biasedness Therefore, for the estimate to be consistent, P_{i|i} ≥ Σ + b b^T (in the positive semidefinite sense); a biased filter can still be consistent if its covariance is inflated accordingly 23
24 Matched Biased 24
25 Non-Gaussian Noise Models The definition applies similarly if the distribution is non-Gaussian In this case, we just compute the moments of the distribution 25
26 Non-Gaussian Noise Models 26
27 Comparison with Entropy We could argue that, instead of covariance consistency, we could use a measure like entropy However: To compute entropy we have to assume a distribution (e.g., Gaussian) Even with this assumption, it won't tell us if our estimate is too small 27
28 Entropy-Matched Distributions 28
29 First Iteration of the Kalman Filter Cycle Initialise Predict Update Observation 29
30 Prediction Step Under the assumption that the errors are small, linearised prediction equations are used 30
31 Kalman Filter Update Step We (arbitrarily) decide a linear update rule of the form x̂_{i|i} = x̂_{i|i-1} + K_i ν_i, where ν_i = z_i - H_i x̂_{i|i-1} is the innovation vector
32 Kalman Filter Update Step The weight matrix K_i is chosen to minimise the trace of the updated covariance matrix It can be shown that this is K_i = P_{i|i-1} H_i^T S_i^{-1}, with S_i = H_i P_{i|i-1} H_i^T + R_i, and this gives a covariance update equation of the form P_{i|i} = (I - K_i H_i) P_{i|i-1}
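The update step above can be sketched numerically. This is a minimal illustration of a single linear Kalman filter update; the variable names and numbers are illustrative, not taken from the slides:

```python
import numpy as np

def kf_update(x_pred, P_pred, z, H, R):
    """Fuse a predicted state (x_pred, P_pred) with an observation z = H x + v."""
    nu = z - H @ x_pred                    # innovation vector
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # gain minimising trace of updated P
    x_upd = x_pred + K @ nu
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_upd, P_upd

# Example: a 2D state observed directly with unit-variance noise.
x_pred = np.array([1.0, 2.0])
P_pred = np.eye(2) * 4.0
H = np.eye(2)
R = np.eye(2)
x_upd, P_upd = kf_update(x_pred, P_pred, np.array([1.5, 2.5]), H, R)
print(x_upd)            # mean pulled towards the observation
print(np.trace(P_upd))  # uncertainty reduced relative to trace(P_pred)
```

Note that the trace of the updated covariance is always no larger than the predicted one, consistent with the trace-minimising choice of gain.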
33 Debiased Coordinate Conversions A sensor measures the range r and bearing θ to a target The aim is to estimate the (x, y) coordinates of the target The measurement model is, of course: x = r cos θ, y = r sin θ 33
34 Debiased Coordinate Conversions 34
35 Debiased Coordinate Conversions in Action 35
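The bias in the naive conversion can be demonstrated by simulation. With bearing noise of standard deviation σ_b, E[cos(noise)] = exp(-σ_b²/2) < 1, so the converted mean falls systematically short of the true position. The sketch below uses exaggerated, illustrative noise values (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
r_true, b_true = 100.0, 0.0      # true range (m) and bearing (rad)
sigma_r, sigma_b = 0.1, 0.5      # large bearing noise exaggerates the effect

n = 200_000
r = r_true + sigma_r * rng.standard_normal(n)
b = b_true + sigma_b * rng.standard_normal(n)

# Monte Carlo mean of the naive conversion x = r cos(b).
x_naive = np.mean(r * np.cos(b))

# Debiased conversion: divide out the known shrink factor exp(-sigma_b^2 / 2).
x_debiased = x_naive / np.exp(-sigma_b**2 / 2)

print(x_naive)     # noticeably less than the true x = 100
print(x_debiased)  # close to the true x = 100
```

The same multiplicative correction idea underlies the debiased conversion discussed in the slides, though practical versions also correct the converted covariance.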
36 [Mapping Example]
37 Extended Kalman Filter A common assumption is that the errors are small Therefore, the first two moments are approximated by linearisation: the mean is propagated through the nonlinear models, and the covariance through their Jacobians Various kinds of analytical and numerical moment approximations are widely used as well
38 Double Counting Motivation Kalman Filters Double Counting Optimal Distributed Data Fusion Suboptimal Distributed Data Fusion Probabilistic Interpretation of WGMs Summary 38
39 Distributed Fusion Kalman Filter Cycle Initialise Predict Update Local Observation Broadcast Estimate Update Remote Estimate Observation Other Nodes Additional steps due to distribution 39
40 Basic Idea of DDF Each platform maintains its own estimate of the target state Each node runs a Kalman filter locally and fuses locally taken measurements The update is distributed to the other nodes, which fuse with it 40
41 However, There Is A Slight Complication The state information stored in each node is not independent of the information in other nodes: Common process noise occurs whether or not nodes have exchanged information Common measurement history occurs when nodes exchange information Assuming that state estimates are independent of one another is bad However, this assumption is often used in so-called weak coupling of, say, GPS and INS systems 41
42 Dependent Information and Information Sets Each node collects its own set of data, which is independent of the other nodes'
43 Fusion of Information Sets Estimates (information sets) exchanged between nodes
44 Fusion of Independent Information Sets
45 When Do Independent Sets Arise? Independent sets arise when the information delivered to each node is conditionally independent This can only happen if you can guarantee that: The target is stationary The poses of the UAVs are known perfectly The same observation information is only ever used once
46 Multiple Platform Fusion
47 Dependent Information Sets Both sets now contain common information; not conditionally independent
48 Fusion of Dependent Information Sets New information Common information New information
49 Assuming Conditional Independence The common information term is double counted
50 Double Counting in State Space Form Within a Kalman filter, the dependency is through the values of the cross correlations These can be evaluated by considering the state of the entire network, including the joint state of all platforms, all objects being tracked, etc. 50
51 Double Counting in State Space Form The full covariance structure of this estimate is the joint matrix P = [ P_aa P_ab ; P_ab^T P_bb ], including the cross-correlation blocks P_ab
52 Double Counting in State Space Form However, if we only maintain the marginals for each platform separately, the cross-correlation blocks are discarded and we effectively assume P ≈ [ P_aa 0 ; 0 P_bb ]
53 Double Counting in State Space Form The error in the approximate covariance matrix is the discarded block [ 0 P_ab ; P_ab^T 0 ], which is not positive semidefinite Therefore, we are using an inconsistent approximation of our network and our system will fail 53
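The consequence of ignoring the cross-correlations can be shown with a small numerical sketch. Two nodes estimate the same scalar; their errors share a common component (standing in for common process noise), but each node only stores its own marginal variance. Fusing as if independent claims a variance smaller than the actual mean squared error, i.e. an inconsistent estimate. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
common = rng.normal(0.0, np.sqrt(0.8), n)          # shared error component
est_a = common + rng.normal(0.0, np.sqrt(0.2), n)  # node a, true error variance 1.0
est_b = common + rng.normal(0.0, np.sqrt(0.2), n)  # node b, true error variance 1.0

# Naive fusion assuming independence: equal-weight average,
# with claimed variance 1 / (1/1 + 1/1) = 0.5.
fused = 0.5 * (est_a + est_b)
claimed_var = 0.5

# The true value is 0, so the actual MSE is E[fused^2],
# which includes the un-modelled cross-covariance of 0.8.
actual_mse = np.mean(fused**2)
print(claimed_var, actual_mse)  # actual MSE (~0.9) far exceeds the claimed 0.5
```

The claimed covariance underestimates the real error, which is precisely the covariance-consistency failure described above.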
54 Assuming State Estimates Are Independent
55 Optimal Distributed Data Fusion Motivation Kalman Filters Double Counting Optimal Distributed Data Fusion Suboptimal Distributed Data Fusion Probabilistic Interpretation of WGMs Summary 55
56 The Right Way to Solve the Problem Chong and Mori showed that this can be implemented with a modified form of Bayes Rule which cancels out the common information between nodes This can only be computed locally with special network topologies
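The rule can be written explicitly. A sketch for two nodes with information sets Z_i and Z_j (the set notation here is assumed, not taken verbatim from the slides): the product of the two posteriors is divided by the posterior conditioned on the common information, so that information shared by both nodes is counted exactly once.

```latex
p(x \mid Z_i \cup Z_j) \;\propto\;
  \frac{p(x \mid Z_i)\, p(x \mid Z_j)}{p(x \mid Z_i \cap Z_j)}
```

Computing the denominator requires knowing the common information Z_i ∩ Z_j, which is what restricts exact solutions to special network topologies.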
57 Approach 1: Distribute Observations Broadcast all observations to all nodes 57
58 Pros and Cons Advantages: Each node has optimal estimate for all time Distribution provides no additional complexity to fusion algorithm Actually used in practice Disadvantages: Requires all nodes to have the same communication and computational abilities Requires extremely large bandwidth Introduces implicit assumption that all nodes have exactly the same estimate 58
59 Approach 2: Fully-Connected Network Broadcast all updated state estimates to all nodes 59
60 Fully-Connected Networks The easiest way to implement a fully connected network is to use the inverse covariance (or information) form of the Kalman Filter The state space is replaced by the information variables y_{i|i} = P_{i|i}^{-1} x̂_{i|i} and Y_{i|i} = P_{i|i}^{-1} 60
61 Updating in Information Form Using the information form, the update simplifies to y_{i|i} = y_{i|i-1} + i_i and Y_{i|i} = Y_{i|i-1} + I_i, where the information from the observations is i_i = H_i^T R_i^{-1} z_i and I_i = H_i^T R_i^{-1} H_i 61
62 Distributed Information Updates Since the information from the observations is independent of the state, i_n and I_n are independent of previous state estimates and can be safely distributed The update rule simply becomes a sum of the information contributions from all nodes 62
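The additive structure of the information-form update is what makes it distributable: contributions from different nodes can be summed in any order. A minimal sketch (names and numbers are illustrative, not from the slides):

```python
import numpy as np

def obs_information(z, H, R):
    """Information contribution of one observation: i = H^T R^-1 z, I = H^T R^-1 H."""
    Rinv = np.linalg.inv(R)
    return H.T @ Rinv @ z, H.T @ Rinv @ H

# Predicted estimate in information form: y = P^-1 x, Y = P^-1.
x_pred = np.array([1.0, 2.0])
P_pred = np.eye(2) * 4.0
Y = np.linalg.inv(P_pred)
y = Y @ x_pred

# Two nodes each observe the state directly with unit-variance noise and
# broadcast their (i_k, I_k); every node just adds what it receives.
H, R = np.eye(2), np.eye(2)
for z in [np.array([1.5, 2.5]), np.array([0.9, 1.8])]:
    i_k, I_k = obs_information(z, H, R)
    y, Y = y + i_k, Y + I_k   # additive update: order does not matter

# Recover the state-space form of the fused estimate.
P_upd = np.linalg.inv(Y)
x_upd = P_upd @ y
print(x_upd, np.trace(P_upd))
```

Because the observation information terms do not depend on the current estimate, no double counting occurs as long as each observation is broadcast exactly once.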
63 Fully-Connected Network Advantages: Each node has optimal estimate for all time Broadcasting the observation information variables potentially saves bandwidth Disadvantages: Requires all nodes to have the same communication and computational abilities Still requires O(N^2) communication links Introduces explicit assumption that all nodes have exactly the same estimate (important if linearising, e.g., with an EKF) 63
64 Approach 3: Hierarchical Network Network has master and slave nodes Slaves fuse data locally Estimates sent to master which fuses them together Revised estimate broadcast back to slaves 64
65 Fusion in the Slave The slave updates using the information Kalman filter equations: 65
66 Fusion in the Master The master updates by summing the information from all the slaves To compensate for the prediction which was sent out, the master must subtract out the common information 66
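A sketch of the master's fusion step in information form, under the assumed notation y, Y for the information vector and matrix (not taken verbatim from the slides): the prediction (y_pred, Y_pred) is sent to all N slaves, each slave k returns its updated (y_k, Y_k), and the master subtracts the N − 1 redundant copies of the shared prediction.

```latex
\hat{y} \;=\; y_{\mathrm{pred}} + \sum_{k=1}^{N} \bigl( y_k - y_{\mathrm{pred}} \bigr),
\qquad
\hat{Y} \;=\; Y_{\mathrm{pred}} + \sum_{k=1}^{N} \bigl( Y_k - Y_{\mathrm{pred}} \bigr)
```

Each bracketed difference is exactly the new observation information collected by slave k, so summing the differences counts the common prediction once and the new information from every slave once.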
67 Hierarchical Network Advantages: Each node has optimal estimate for all time The number of communication links is O(N) Disadvantages: Additional latency One node is privileged; failure of that node causes the whole network to fail 67
68 Approach 4: Channel Filters Constrain the network to be a tree Single path between any pair of nodes Use channel filters to subtract off common information 68
69 Estimating Common Information Consider a link between a pair of nodes i and j The channel filter maintains the common information across the link It has its own information estimate (y_{ij}, Y_{ij}) 69
70 Updating Local Nodes The Channel Filter is a regular Kalman Filter but works with the information exchanged between i and j rather than the observation data directly First, let the update at filter i using the local sensor observations be written as 70
71 Fusing With Nearby Nodes The updated estimate is given by summing all the independent information from a node's neighbours, with the channel filter estimate subtracted from each incoming contribution 71
72 Updating the Channel Filters The channel filter update is given by recursively updating with the difference in information variables from the two nodes 72
73 Channel Filters in Action 73
74 Advantages and Disadvantages Advantages: The number of communication links is O(N) Optimal in a time-delayed sense Disadvantages: Estimates at all nodes differ Single path of communication; no redundancy If the network is reconfigured, the channel filters have to be recalculated from scratch Global time synchronisation 74
75 Hybrid Architectures Channel filters can be mixed-and-matched with other local topologies like observation distribution or master-slave 75
76 Review of Techniques So Far It is possible to develop optimal algorithms for distributed data fusion using local message passing only However, these techniques rely on special network topologies: Fully connected Tree-connected In general, preserving these topologies can be difficult and undesirable 76
77 Ad Hoc Network Arbitrary network with loops and cycles Complete flexibility and redundancy 77
78 Distributed Data Fusion in Ad Hoc Networks However, it has been shown that no local data fusion scheme can produce consistent, optimal estimates in this situation Therefore, it appears that optimal DDF is strictly limited to very particular data fusion architectures Alternative approach: can we develop mathematically rigorous suboptimal solutions? 78
79 Suboptimal Distributed Data Fusion Motivation Kalman Filters Double Counting Optimal Distributed Data Fusion Suboptimal Distributed Data Fusion Probabilistic Interpretation of WGMs Summary 79
80 Double Counting in State Space Form Recall again that the problem is that we want to know the full joint covariance However, we only know the marginalised form 80
81 Double Counting in State Space Form From knowledge of the marginals alone, it is not possible to reconstruct the full joint covariance matrix However, because the joint covariance matrix must be positive semidefinite, we know there are constraints on what the cross correlations look like Therefore, we can exploit these constraints to develop update rules which are consistent for any feasible cross correlation 81
82 The Kalman Filter with Correlated Noise First consider the case where the observation noise is not independent of the filter state It can be shown that the expectations in the update become functions of the (unknown) cross-correlations
83 Properties of Updated Covariances 83
84 Applying the Results to Covariance Intersection The update which generates a family of ellipses which circumscribe the intersection region is given by P_c^{-1} = ω P_a^{-1} + (1 − ω) P_b^{-1}, P_c^{-1} x̂_c = ω P_a^{-1} x̂_a + (1 − ω) P_b^{-1} x̂_b This is the same as a Kalman filter update, but with the prior covariance scaled by 1/ω and the observation covariance by 1/(1 − ω) 84
85 Covariance Intersection 85
86 Choosing ω The free parameter ω is used to trade off between the prediction and the observation It should be chosen to minimise some measure of uncertainty in the estimate: Trace Determinant (better) The optimisation is convex and so many simple solver algorithms can be used Some closed form solutions have been developed as well 86
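A sketch of Covariance Intersection with the weight chosen by the determinant criterion. The optimisation is done here by a simple grid scan over ω (the objective is convex, so any 1D solver would do); estimates and numbers are illustrative, not from the slides:

```python
import numpy as np

def ci_fuse(a, Pa, b, Pb, w):
    """Covariance Intersection of (a, Pa) and (b, Pb) with weight w in (0, 1):
    Pc^-1 = w Pa^-1 + (1-w) Pb^-1, and the mean is the matching weighted sum."""
    Y = w * np.linalg.inv(Pa) + (1 - w) * np.linalg.inv(Pb)
    Pc = np.linalg.inv(Y)
    c = Pc @ (w * np.linalg.inv(Pa) @ a + (1 - w) * np.linalg.inv(Pb) @ b)
    return c, Pc

# Two estimates of the same state with unknown cross-correlation.
a, Pa = np.array([0.0, 0.0]), np.diag([4.0, 1.0])
b, Pb = np.array([1.0, 1.0]), np.diag([1.0, 4.0])

# Scan for the weight minimising the determinant of the fused covariance.
ws = np.linspace(0.01, 0.99, 99)
dets = [np.linalg.det(ci_fuse(a, Pa, b, Pb, w)[1]) for w in ws]
w_opt = ws[int(np.argmin(dets))]
c, Pc = ci_fuse(a, Pa, b, Pb, w_opt)
print(w_opt, np.linalg.det(Pc))
```

For the symmetric pair above the optimum lands at ω = 0.5, and the fused covariance is guaranteed consistent for any actual cross-correlation between the two inputs.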
87 Covariance Intersection In Action 87
88 Probabilistic Interpretation of WGMs Motivation Kalman Filters Double Counting Optimal Distributed Data Fusion Suboptimal Distributed Data Fusion Probabilistic Interpretation of WGMs Summary 88
89 Probabilistic Interpretation Great, so this works with means and covariances - but is it actually doing something valid from a probability distribution point of view? It turns out that a generalisation of CI is equivalent to computing the weighted geometric mean of the distributions
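Written out (a sketch, with p_1, p_2 for the two node distributions and weight ω ∈ [0, 1] as assumed notation), the weighted geometric mean is:

```latex
p_{\omega}(x) \;=\;
  \frac{p_1(x)^{\omega}\, p_2(x)^{1-\omega}}
       {\displaystyle\int p_1(y)^{\omega}\, p_2(y)^{1-\omega}\, dy}
```

For Gaussian p_1 and p_2 this reduces exactly to the Covariance Intersection update, since raising a Gaussian to a power scales its information matrix and the product then adds the scaled information terms.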
90 WGM Does Not Double Count The common information term appears only single counted
91 Structure of the Fusion Rule The fusion rule has the form of a product of a function of the new information with the common information single counted
92 Information Losses and Gains Therefore, we now need to ask what the effect of this fusion rule is We can assess this in several ways: By observation Pointwise bounds Information measures Surprisingly hard; still a work in progress
93 Example Distributions
94-99 Effect of [sequence of six figure slides]
100 Pointwise Bounds It is possible to establish pointwise bounds which apply at each point in the distribution Although pointwise bounds play no special role in Bayesian statistics, they provide some insight into the behaviour of the fusion rule
101 Bounds for the Unnormalised Distribution Let g(x) = p_1(x)^ω p_2(x)^{1−ω} This is always squeezed between the two distributions: min(p_1(x), p_2(x)) ≤ g(x) ≤ max(p_1(x), p_2(x))
102 Illustration of the Unnormalised Bound
103 Lower Bound Consider the distribution where The WGM obeys the lower bound
104 Illustration of the Lower Bound
105 Interpreting the Lower Bound The minimum value of a distribution plays no special role in Bayesian statistics However, the bound from below Avoids degenerate cases The support has to contain the intersection of the supports of the prior distributions Lower bounds on distributions often play a role in practical filtering algorithms Truncate distributions or modes in MHT if the probability is too small
106 Upper Inequality There can exist an x such that the normalised WGM exceeds both p_1(x) and p_2(x) The fact that the distribution can exceed the maximum suggests that fusion can occur: The distribution becomes more concentrated
107 Illustration of the Upper Inequality
108 Updated Distribution
109 Summary Motivation Kalman Filters Double Counting Optimal Distributed Data Fusion Suboptimal Distributed Data Fusion Probabilistic Interpretation of WGMs Summary 109
110 Summary Distributed data fusion is important for many applications However, estimates are not conditionally independent Optimal solutions can be used only in limited circumstances Suboptimal algorithms can be used more widely The KF is more than Bayes with Gaussians! 110
More informationRao-Blackwellized Particle Filter for Multiple Target Tracking
Rao-Blackwellized Particle Filter for Multiple Target Tracking Simo Särkkä, Aki Vehtari, Jouko Lampinen Helsinki University of Technology, Finland Abstract In this article we propose a new Rao-Blackwellized
More informationMobile Robot Localization
Mobile Robot Localization 1 The Problem of Robot Localization Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)? Two sources of uncertainty: - observations
More informationCopyrighted Material. 1.1 Large-Scale Interconnected Dynamical Systems
Chapter One Introduction 1.1 Large-Scale Interconnected Dynamical Systems Modern complex dynamical systems 1 are highly interconnected and mutually interdependent, both physically and through a multitude
More informationPREDICTIVE quantization is one of the most widely-used
618 IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 1, NO. 4, DECEMBER 2007 Robust Predictive Quantization: Analysis and Design Via Convex Optimization Alyson K. Fletcher, Member, IEEE, Sundeep
More informationStat 521A Lecture 18 1
Stat 521A Lecture 18 1 Outline Cts and discrete variables (14.1) Gaussian networks (14.2) Conditional Gaussian networks (14.3) Non-linear Gaussian networks (14.4) Sampling (14.5) 2 Hybrid networks A hybrid
More informationParticle based probability density fusion with differential Shannon entropy criterion
4th International Conference on Information Fusion Chicago, Illinois, USA, July -8, Particle based probability density fusion with differential Shannon entropy criterion Jiří Ajgl and Miroslav Šimandl
More informationA NOVEL OPTIMAL PROBABILITY DENSITY FUNCTION TRACKING FILTER DESIGN 1
A NOVEL OPTIMAL PROBABILITY DENSITY FUNCTION TRACKING FILTER DESIGN 1 Jinglin Zhou Hong Wang, Donghua Zhou Department of Automation, Tsinghua University, Beijing 100084, P. R. China Control Systems Centre,
More informationp L yi z n m x N n xi
y i z n x n N x i Overview Directed and undirected graphs Conditional independence Exact inference Latent variables and EM Variational inference Books statistical perspective Graphical Models, S. Lauritzen
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 3 Linear
More informationLecture 7: Optimal Smoothing
Department of Biomedical Engineering and Computational Science Aalto University March 17, 2011 Contents 1 What is Optimal Smoothing? 2 Bayesian Optimal Smoothing Equations 3 Rauch-Tung-Striebel Smoother
More informationProbabilistic Graphical Models Lecture 17: Markov chain Monte Carlo
Probabilistic Graphical Models Lecture 17: Markov chain Monte Carlo Andrew Gordon Wilson www.cs.cmu.edu/~andrewgw Carnegie Mellon University March 18, 2015 1 / 45 Resources and Attribution Image credits,
More informationDelayed-State Information Filter for Cooperative Decentralized Tracking
29 IEEE International Conference on Robotics and Automation Kobe International Conference Center Kobe, Japan, May 12-17, 29 Delayed-State Information Filter for Cooperative Decentralized Tracking J. Capitán,
More information13 : Variational Inference: Loopy Belief Propagation and Mean Field
10-708: Probabilistic Graphical Models 10-708, Spring 2012 13 : Variational Inference: Loopy Belief Propagation and Mean Field Lecturer: Eric P. Xing Scribes: Peter Schulam and William Wang 1 Introduction
More informationRecap. CS514: Intermediate Course in Operating Systems. What time is it? This week. Reminder: Lamport s approach. But what does time mean?
CS514: Intermediate Course in Operating Systems Professor Ken Birman Vivek Vishnumurthy: TA Recap We ve started a process of isolating questions that arise in big systems Tease out an abstract issue Treat
More information(W: 12:05-1:50, 50-N201)
2015 School of Information Technology and Electrical Engineering at the University of Queensland Schedule Week Date Lecture (W: 12:05-1:50, 50-N201) 1 29-Jul Introduction Representing Position & Orientation
More informationLecture 9: Bayesian Learning
Lecture 9: Bayesian Learning Cognitive Systems II - Machine Learning Part II: Special Aspects of Concept Learning Bayes Theorem, MAL / ML hypotheses, Brute-force MAP LEARNING, MDL principle, Bayes Optimal
More informationStochastic Processes, Kernel Regression, Infinite Mixture Models
Stochastic Processes, Kernel Regression, Infinite Mixture Models Gabriel Huang (TA for Simon Lacoste-Julien) IFT 6269 : Probabilistic Graphical Models - Fall 2018 Stochastic Process = Random Function 2
More informationDiscrete Probability and State Estimation
6.01, Fall Semester, 2007 Lecture 12 Notes 1 MASSACHVSETTS INSTITVTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.01 Introduction to EECS I Fall Semester, 2007 Lecture 12 Notes
More informationStochastic Analogues to Deterministic Optimizers
Stochastic Analogues to Deterministic Optimizers ISMP 2018 Bordeaux, France Vivak Patel Presented by: Mihai Anitescu July 6, 2018 1 Apology I apologize for not being here to give this talk myself. I injured
More informationClock Synchronization
Today: Canonical Problems in Distributed Systems Time ordering and clock synchronization Leader election Mutual exclusion Distributed transactions Deadlock detection Lecture 11, page 7 Clock Synchronization
More informationESTIMATOR STABILITY ANALYSIS IN SLAM. Teresa Vidal-Calleja, Juan Andrade-Cetto, Alberto Sanfeliu
ESTIMATOR STABILITY ANALYSIS IN SLAM Teresa Vidal-Calleja, Juan Andrade-Cetto, Alberto Sanfeliu Institut de Robtica i Informtica Industrial, UPC-CSIC Llorens Artigas 4-6, Barcelona, 88 Spain {tvidal, cetto,
More informationMobile Robot Localization
Mobile Robot Localization 1 The Problem of Robot Localization Given a map of the environment, how can a robot determine its pose (planar coordinates + orientation)? Two sources of uncertainty: - observations
More information12 : Variational Inference I
10-708: Probabilistic Graphical Models, Spring 2015 12 : Variational Inference I Lecturer: Eric P. Xing Scribes: Fattaneh Jabbari, Eric Lei, Evan Shapiro 1 Introduction Probabilistic inference is one of
More informationExact Inference I. Mark Peot. In this lecture we will look at issues associated with exact inference. = =
Exact Inference I Mark Peot In this lecture we will look at issues associated with exact inference 10 Queries The objective of probabilistic inference is to compute a joint distribution of a set of query
More informationConditions for Suboptimal Filter Stability in SLAM
Conditions for Suboptimal Filter Stability in SLAM Teresa Vidal-Calleja, Juan Andrade-Cetto and Alberto Sanfeliu Institut de Robòtica i Informàtica Industrial, UPC-CSIC Llorens Artigas -, Barcelona, Spain
More informationGaussian Process Approximations of Stochastic Differential Equations
Gaussian Process Approximations of Stochastic Differential Equations Cédric Archambeau Centre for Computational Statistics and Machine Learning University College London c.archambeau@cs.ucl.ac.uk CSML
More informationMarkov localization uses an explicit, discrete representation for the probability of all position in the state space.
Markov Kalman Filter Localization Markov localization localization starting from any unknown position recovers from ambiguous situation. However, to update the probability of all positions within the whole
More informationLARGE-SCALE TRAFFIC STATE ESTIMATION
Hans van Lint, Yufei Yuan & Friso Scholten A localized deterministic Ensemble Kalman Filter LARGE-SCALE TRAFFIC STATE ESTIMATION CONTENTS Intro: need for large-scale traffic state estimation Some Kalman
More informationPlease bring the task to your first physics lesson and hand it to the teacher.
Pre-enrolment task for 2014 entry Physics Why do I need to complete a pre-enrolment task? This bridging pack serves a number of purposes. It gives you practice in some of the important skills you will
More informationCS 6820 Fall 2014 Lectures, October 3-20, 2014
Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given
More informationMark Gales October y (x) x 1. x 2 y (x) Inputs. Outputs. x d. y (x) Second Output layer layer. layer.
University of Cambridge Engineering Part IIB & EIST Part II Paper I0: Advanced Pattern Processing Handouts 4 & 5: Multi-Layer Perceptron: Introduction and Training x y (x) Inputs x 2 y (x) 2 Outputs x
More informationTrack-to-track Fusion for Multi-target Tracking Using Asynchronous and Delayed Data
Track-to-track Fusion for Multi-target Tracking Using Asynchronous and Delayed Data Master s thesis in Systems, Control and Mechatronics ALEXANDER BERG ANDREAS KÄLL Department of Signals and Systems CHALMERS
More informationSTATE ESTIMATION IN COORDINATED CONTROL WITH A NON-STANDARD INFORMATION ARCHITECTURE. Jun Yan, Keunmo Kang, and Robert Bitmead
STATE ESTIMATION IN COORDINATED CONTROL WITH A NON-STANDARD INFORMATION ARCHITECTURE Jun Yan, Keunmo Kang, and Robert Bitmead Department of Mechanical & Aerospace Engineering University of California San
More informationMemory, Latches, & Registers
Memory, Latches, & Registers 1) Structured Logic Arrays 2) Memory Arrays 3) Transparent Latches 4) How to save a few bucks at toll booths 5) Edge-triggered Registers L13 Memory 1 General Table Lookup Synthesis
More informationSTAT 302 Introduction to Probability Learning Outcomes. Textbook: A First Course in Probability by Sheldon Ross, 8 th ed.
STAT 302 Introduction to Probability Learning Outcomes Textbook: A First Course in Probability by Sheldon Ross, 8 th ed. Chapter 1: Combinatorial Analysis Demonstrate the ability to solve combinatorial
More informationLQR, Kalman Filter, and LQG. Postgraduate Course, M.Sc. Electrical Engineering Department College of Engineering University of Salahaddin
LQR, Kalman Filter, and LQG Postgraduate Course, M.Sc. Electrical Engineering Department College of Engineering University of Salahaddin May 2015 Linear Quadratic Regulator (LQR) Consider a linear system
More informationTutorial on Mathematical Induction
Tutorial on Mathematical Induction Roy Overbeek VU University Amsterdam Department of Computer Science r.overbeek@student.vu.nl April 22, 2014 1 Dominoes: from case-by-case to induction Suppose that you
More information5682 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE
5682 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Hyperplane-Based Vector Quantization for Distributed Estimation in Wireless Sensor Networks Jun Fang, Member, IEEE, and Hongbin
More information