Recursive BLUE-BLUP and the Kalman filter: Estimation and Prediction Scenarios
Amir Khodabandeh, GNSS Research Centre, Curtin University of Technology, Perth, Australia
IUGG 2011, 28 June – 7 July 2011, Melbourne, Australia

Content
1. Basic concepts of Prediction
   - An introductory example (bivariate sampled data)
   - Best prediction within different classes of statistics
2. Best linear prediction (BLP)
   - BLP and its examples
   - BLP-based Kalman filter and its limitations
3. Best linear unbiased prediction (BLUP)

4. BLUE-BLUP recursion
   - Initialization (Prediction = Estimation)
   - Time update
   - Measurement update
5. Summary and concluding remarks

Basic concepts of Prediction
[Figure: joint histogram (empirical density) of sampled bivariate data, with marginal histograms of Weight and Height.]

Basic concepts of Prediction
[Figure: histograms of one variable (Weight) for a given sampled value of the other (Height), i.e. the empirical conditional density.]

Basic concepts of Prediction
Bivariate sampled data set of size 5000; empirical conditional mean.
Case 1: guessing a variable based on its mean.
Case 2: guessing a variable based on its mean conditioned on a given value of the other variable.

Basic concepts of Prediction
Mean squared error (MSE):
Case 1: mean value = 1.0049
Case 2: mean value = 0.5492
Conditioning on the second variable roughly halves the MSE.
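
A minimal numerical sketch of the two guessing strategies (synthetic data; the distribution parameters, bin count, and seed are my own assumptions, so the MSE values will not match the slide's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Correlated bivariate sample standing in for (Height, Weight).
height = rng.normal(170.0, 10.0, n)
weight = 70.0 + 0.9 * (height - 170.0) + rng.normal(0.0, 5.0, n)

# Case 1: guess Weight by its unconditional mean.
mse_case1 = np.mean((weight - weight.mean()) ** 2)

# Case 2: guess Weight by its empirical conditional mean given Height,
# approximated by averaging within 50 equally-populated Height bins.
edges = np.quantile(height, np.linspace(0.0, 1.0, 51)[1:-1])
bins = np.digitize(height, edges)
cond_mean = np.array([weight[bins == b].mean() for b in range(50)])
mse_case2 = np.mean((weight - cond_mean[bins]) ** 2)

print(mse_case1, mse_case2)  # conditioning shrinks the MSE markedly
```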

Basic concepts of Prediction
Estimation: guessing the value of an unknown (nonrandom) parameter describing the distribution of a random vector, given a sample value of that vector.

Basic concepts of Prediction
Prediction: guessing an outcome of an unobservable random vector using an observable random vector.

Basic concepts of Prediction
Best predictor (BP): minimize the mean squared prediction error over the class of all statistics of the observable vector.
Solution: the conditional mean, $\hat{x} = E(x \mid y)$.
Limitations:
- Information on the conditional PDF must be available.
- The BP is generally a nonlinear predictor.
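
For reference, the one-line argument behind this solution (standard, not from the slides): for any statistic $g(y)$, iterated expectations make the cross term vanish, so

```latex
\begin{align*}
E\!\left[(x - g(y))^2\right]
  &= E\!\left[(x - E(x\mid y))^2\right] + E\!\left[(E(x\mid y) - g(y))^2\right]\\
  &\ \ge\ E\!\left[(x - E(x\mid y))^2\right],
\end{align*}
% with equality iff g(y) = E(x|y): the best predictor is the conditional mean.
```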

Basic concepts of Prediction
Best linear predictor (BLP): we restrict to the class of affine statistics $\hat{x} = L y + l_0$, at the cost of a larger mean squared error.
Solution: $\hat{x} = E(x) + Q_{xy} Q_{yy}^{-1}\,(y - E(y))$.
Do we still, in practice, need to further restrict the class of affine statistics?
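
A minimal sketch of this solution in code (function and variable names are mine; it assumes the joint first and second moments are given):

```python
import numpy as np

def blp(y, mu_x, mu_y, Q_xy, Q_yy):
    """Best linear predictor: x_hat = mu_x + Q_xy Q_yy^{-1} (y - mu_y)."""
    return mu_x + Q_xy @ np.linalg.solve(Q_yy, y - mu_y)

def blp_error_variance(Q_xx, Q_xy, Q_yy):
    """Error variance of the BLP: Q_xx - Q_xy Q_yy^{-1} Q_yx."""
    return Q_xx - Q_xy @ np.linalg.solve(Q_yy, Q_xy.T)
```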

Best linear prediction (BLP)
Example: random signal extraction. A signal with a known sinusoidal mean; the observed signal carries measurement noise. Perfect observation case.
[Figure: observed signal (left) and predicted signal (right).]

Best linear prediction (BLP)
Example: random signal extraction, no-observation case. A signal with a known sinusoidal mean; the observed signal carries measurement noise.
[Figure: observed signal (left) and predicted signal (right).]

Best linear prediction (BLP)
Example: interpolation and extrapolation of a zero-mean signal with no measurement error, using an exponential auto-covariance function of the signal's 2D position.

Best linear prediction (BLP)
Case 1: a sparse grid of observations.
[Figure: true signal vs. predicted signal.]

Best linear prediction (BLP)
Case 2: a dense grid of observations.
[Figure: true signal vs. predicted signal.]
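
A sketch of the interpolation behind both cases, assuming (as the slides state) a zero-mean signal observed without measurement error and an exponential auto-covariance of 2D position; the grid sizes and kernel parameters are illustrative assumptions:

```python
import numpy as np

def exp_cov(p, q, sigma2=1.0, ell=0.2):
    """Exponential auto-covariance between two sets of 2D positions."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / ell)

rng = np.random.default_rng(1)
obs = rng.uniform(0.0, 1.0, (30, 2))   # observation positions (sparse grid)
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# One realization of the zero-mean signal at the observed positions ...
y = rng.multivariate_normal(np.zeros(len(obs)), exp_cov(obs, obs))

# ... and its BLP on the dense grid (zero mean: x_hat = Q_xy Q_yy^{-1} y).
x_hat = exp_cov(grid, obs) @ np.linalg.solve(exp_cov(obs, obs), y)
```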

Best linear prediction (BLP)
The BLP is applicable to any linear model, thus also to the model underlying the Kalman filter (Kalman filter structure in batch form).
*That the system noises are uncorrelated in time makes the recursion possible in the time update.
*That the measurement noises are uncorrelated in time makes the recursion possible in the measurement update.
Together, these two properties make the recursion feasible.

Best linear prediction (BLP)
Limitations: the BLP-based Kalman filter requires information on the mean and the initial variance matrix of the state vector. In most applications, this information is not available (unknown!).

Best linear prediction (BLP)
An ad hoc solution (the diffuse filter): given a user-defined initial value, we set the elements of the initial variance matrix to sufficiently large values.
Two unresolved problems:
1) To what extent should the initial uncertainty take large values in order to practically approximate the diffuse limit?
2) Even if the first problem (the prediction part) were solved, one would still need to determine the unknown mean (the estimation part).
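
A scalar toy illustration of the diffuse idea and of problem 1 (all numbers are my assumptions): as the initial variance grows, the filter forgets the arbitrary initial mean, but how large is "large enough" must be checked numerically:

```python
import numpy as np

def scalar_filter(y, x0, p0, q=0.01, r=1.0):
    """Scalar Kalman filter with random-walk dynamics x_t = x_{t-1} + d_t."""
    x, p = x0, p0
    for obs in y:
        p = p + q                                # time update
        k = p / (p + r)                          # gain
        x, p = x + k * (obs - x), (1.0 - k) * p  # measurement update
    return x

rng = np.random.default_rng(2)
y = 5.0 + rng.normal(0.0, 1.0, 20)  # noisy observations of a level near 5

for p0 in (1.0, 1e2, 1e6):
    # Two arbitrary initial means; their imprint fades as p0 grows.
    print(p0, scalar_filter(y, 0.0, p0), scalar_filter(y, 100.0, p0))
```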

Best linear unbiased prediction (BLUP)
We further restrict to the class of unbiased linear statistics; the price to pay for this simplification is a further increase in the mean squared error.
Underlying model: an observable random vector and an unobservable random vector, with an unknown mean.

Best linear unbiased prediction (BLUP)
Class of linear unbiased statistics; the prediction error and the estimation error are related through a square invertible matrix.
Solution: the BLUP of the unobservable vector together with the BLUE of the unknown mean.

BLUE-BLUP recursion
Innovation process: an invertible transformation turns the misclosures into group-wise uncorrelated misclosures, i.e. a block-diagonal decomposition of their variance matrix.

BLUE-BLUP recursion
Statistics of the uncorrelated misclosures: the innovations are the prediction errors of the misclosures. In case of partitioned linear models, through a proper basis matrix, these become the predicted residuals.
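
A two-block numerical sketch of this construction (the covariance values are mine): subtracting from the second misclosure block its BLP given the first leaves an innovation that is uncorrelated with the first block:

```python
import numpy as np

rng = np.random.default_rng(3)
Q = np.array([[2.0, 0.8],
              [0.8, 1.5]])  # joint variance matrix of two misclosure blocks
t = rng.multivariate_normal([0.0, 0.0], Q, 100_000)
t1, t2 = t[:, 0], t[:, 1]

# Innovation: the prediction error of t2 given t1 (zero-mean BLP).
v = t2 - (Q[1, 0] / Q[0, 0]) * t1

print(np.cov(t1, v))  # off-diagonal entries ~ 0: group-wise uncorrelated
```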

BLUE-BLUP recursion
Kalman dynamic model and the simplifying assumptions. The dynamic model is the time link between the to-be-predicted variables, built from the transition matrix and the system noise.
Simplifying assumptions:
- zero-mean noise;
- the initial variable is uncorrelated with the system noises;
- the system noises are uncorrelated with the measurement noises;
- the system noises are uncorrelated in time.

BLUE-BLUP recursion
Measurement model and the simplifying assumptions. The measurement model is the link between the observables and the to-be-predicted state vector, subject to measurement noise.
Simplifying assumptions:
- zero-mean noise;
- the initial variable is uncorrelated with the measurement noises;
- the measurement noises are uncorrelated in time.
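
Under these assumptions, the time and measurement updates discussed on the following slides take the familiar predict/update form; a minimal sketch (the symbols Phi, Q, A, R are my own notation, since the slides' equations did not survive the transcription):

```python
import numpy as np

def time_update(x, P, Phi, Q):
    """Propagate the state and its error variance through the dynamic model."""
    return Phi @ x, Phi @ P @ Phi.T + Q

def measurement_update(x, P, y, A, R):
    """Update with the predicted residual (innovation) y - A x."""
    S = A @ P @ A.T + R               # innovation variance matrix
    K = P @ A.T @ np.linalg.inv(S)    # (prediction) gain matrix
    x = x + K @ (y - A @ x)
    P = P - K @ A @ P
    return x, P
```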

BLUE-BLUP recursion
The BLUP is applicable to any linear model, thus also to the model underlying the Kalman filter (partitioned form).

BLUE-BLUP recursion
Initialization: a linear unbiased statistic is proposed. The misclosure vector of the initial model delivers both the prediction and the estimation, related through a square invertible matrix.

BLUE-BLUP recursion
Initialization: Prediction = Estimation.
*The solution is independent of the initial uncertainty.
Although the two solutions are identical, their quality is judged in different ways: the prediction and estimation error variance matrices differ.

BLUE-BLUP recursion
Time update: the predictor and the estimator are both propagated through the dynamic model, together with their error variance matrices.

BLUE-BLUP recursion
Time update:
*The BLUP (BLUE) of a linear function is the linear function of the BLUP (BLUE).
The prediction and estimation solutions follow, together with their error variance matrices (the noise cross-terms being zero).

BLUE-BLUP recursion
Measurement update: the new observations enter through the predicted residuals.

BLUE-BLUP recursion
Measurement update, continued: the update is driven by the predicted residuals.
*The BLUP of a linear function is the linear function of the BLUP.

BLUE-BLUP recursion
Measurement update: in case of prediction, we propose one linear unbiased statistic; in case of estimation, we propose another. They yield the prediction and estimation solutions, respectively.

BLUE-BLUP recursion
Recursion of the prediction and estimation error variance matrices: initialization, time update, and measurement update, each with its own gain matrix (the prediction gain matrix and the estimation gain matrix).

BLUE-BLUP recursion
Recursive algorithm: collecting the estimator and the predictor into one single state vector, the recursion runs through the initialization, time update, and measurement update steps.

BLUE-BLUP recursion
[Figure: block diagram of the BLUE-BLUP algorithm.]

Summary and concluding remarks
- Prediction: when observations are used to guess a random vector.
- Estimation: when observations are used to guess an unknown nonrandom vector.
- Best predictors were derived within different classes of statistics.
- The BLP-based Kalman filter requires the initial uncertainty of the state vector, whereas the BLUP-based one is independent of it.

Summary and concluding remarks
- The BLUE recursion cannot stand on its own, since it requires the predicted residuals and therefore the predicted state vector.
- The estimation gain matrix is related to the prediction gain matrix; the two become identical, and BLUE = BLUP, when the system noise is absent.