Recursive BLUE-BLUP and the Kalman filter: Estimation and Prediction Scenarios
Amir Khodabandeh
GNSS Research Centre, Curtin University of Technology, Perth, Australia
IUGG 2011, 28 June – 7 July 2011, Melbourne, Australia
Content
1. Basic concepts of prediction
   An introductory example (bivariate sampled data)
   Best prediction within different classes of statistics
2. Best linear prediction (BLP)
   BLP and its examples
   The BLP-based Kalman filter and its limitations
3. Best linear unbiased prediction (BLUP)
4. BLUE-BLUP recursion
   Initialization (Prediction = Estimation)
   Time update
   Measurement update
5. Summary and concluding remarks
Basic concepts of prediction
Histogram (empirical density) of bivariate data.
[Figure: joint histogram of the sampled data and its marginal histograms; axes: Weight, Height.]
Basic concepts of prediction
Histograms of Weight for a given sampled value of Height: the empirical conditional density.
[Figure: conditional histograms; axes: Weight, Height.]
Basic concepts of prediction
Bivariate sampled data set of size 5000; empirical conditional mean.
Case 1: guessing a variable from its (unconditional) mean.
Case 2: guessing the variable from its mean conditioned on a given value of the other variable.
Basic concepts of prediction
Mean squared error (MSE) of the two guesses:
Case 1: mean value = 1.0049
Case 2: mean value = 0.5492
Conditioning on the second variable roughly halves the MSE.
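The MSE comparison of the two cases can be reproduced with a small simulation. This is a sketch, not the author's data: the correlation 0.7 and the variable names `h`, `w` are illustrative assumptions, so the MSE values differ from the slide's 1.0049 and 0.5492, but the qualitative conclusion (conditioning reduces the MSE) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Correlated bivariate normal sample (stand-ins for "Height" h, "Weight" w)
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.7],
                [0.7, 1.0]])
h, w = rng.multivariate_normal(mean, cov, size=n).T

# Case 1: guess w by its (sample) mean, ignoring h
mse1 = np.mean((w - w.mean()) ** 2)

# Case 2: guess w by the empirical conditional mean E(w | h), which for
# Gaussian data is the linear regression of w on h
beta = np.cov(w, h)[0, 1] / np.var(h)
w_hat = w.mean() + beta * (h - h.mean())
mse2 = np.mean((w - w_hat) ** 2)
```

With a correlation of 0.7 the conditional guess cuts the MSE from about 1 to about 1 - 0.7^2 = 0.51, mirroring the roughly halved MSE reported on the slide.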
Basic concepts of prediction
Estimation: guessing the value of an unknown (nonrandom) parameter describing the distribution of a random vector, given a sample value of that vector.
Basic concepts of prediction
Prediction: guessing an outcome of an unobservable random vector using an observable random vector.
Basic concepts of prediction
Prediction error; class of all statistics (any function of the observables is admissible).
Best predictor (BP): the statistic minimizing the mean squared prediction error.
Solution: the conditional mean.
Limitations: information on the conditional PDF must be available, and the BP is generally a nonlinear predictor.
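The equations lost from this slide can be restated in standard notation (the symbols $x$, $y$, $G$ below are assumptions, not taken from the slide): the best predictor minimizes the mean squared prediction error over all statistics $G(y)$, and its well-known solution is the conditional mean,

```latex
\hat{x}_{\mathrm{BP}}
  = \arg\min_{G}\; \mathsf{E}\big(\|x - G(y)\|^{2}\big)
  = \mathsf{E}(x \mid y),
```

which is why the conditional PDF of $x$ given $y$ must be known, and why the BP is in general a nonlinear function of $y$.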
Basic concepts of prediction
Prediction error; class of affine statistics.
We restrict ourselves to the class of affine statistics, at the cost of a (possibly) larger mean squared error.
Best linear predictor (BLP): the minimum-MSE predictor within this class.
Do we still, in practice, need to restrict the class of affine statistics further?
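In standard notation (symbols again assumed, with $Q$ denoting (cross-)covariance matrices), the BLP and its minimum mean squared error matrix take the familiar form

```latex
\hat{x}_{\mathrm{BLP}}
  = \mathsf{E}(x) + Q_{xy} Q_{yy}^{-1}\big(y - \mathsf{E}(y)\big),
\qquad
\mathsf{MSE}\big(\hat{x}_{\mathrm{BLP}}\big)
  = Q_{xx} - Q_{xy} Q_{yy}^{-1} Q_{yx},
```

so the BLP needs only first and second moments, not the full conditional PDF; this is what motivates the question whether a further restriction of the class is still needed.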
Best linear prediction (BLP)
Example: random signal extraction, perfect observation.
A signal with known sinusoidal mean; the observed signal with measurement noise.
[Figure: observed signal (left) and predicted signal (right) over 120 samples.]
Best linear prediction (BLP)
Example: random signal extraction, no observation.
A signal with known sinusoidal mean; the observed signal with measurement noise.
[Figure: observed signal (left) and predicted signal (right) over 120 samples.]
Best linear prediction (BLP)
Example: interpolation and extrapolation (a zero-mean signal with no measurement error).
Exponential auto-covariance function of the 2-D position of the signal.
Best linear prediction (BLP)
Case 1: a sparse grid of observations.
[Figure: true signal vs. predicted signal.]
Best linear prediction (BLP)
Case 2: a dense grid of observations.
[Figure: true signal vs. predicted signal.]
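The interpolation example above can be sketched numerically. This is an illustration under stated assumptions, not the talk's computation: a 1-D position is used instead of the slide's 2-D one for brevity, and the covariance parameters (`sigma2`, `ell`) are invented. For a zero-mean, error-free signal the BLP reduces to the simple-kriging form s_hat(p) = c(p)^T C_obs^{-1} y.

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_cov(d, sigma2=1.0, ell=0.2):
    """Exponential auto-covariance C(d) = sigma^2 * exp(-d / ell);
    the parameter values are illustrative assumptions."""
    return sigma2 * np.exp(-d / ell)

# Observation positions on a sparse grid, with a zero-mean signal
# drawn from the exponential covariance model
p_obs = np.linspace(0.0, 1.0, 8)
C_obs = exp_cov(np.abs(p_obs[:, None] - p_obs[None, :]))
y = rng.multivariate_normal(np.zeros(p_obs.size), C_obs)

# BLP of the zero-mean, error-free signal at new positions:
# s_hat(p) = c(p)^T C_obs^{-1} y, with c(p) the cross-covariance vector
p_new = np.linspace(0.0, 1.0, 101)
C_cross = exp_cov(np.abs(p_new[:, None] - p_obs[None, :]))
s_hat = C_cross @ np.linalg.solve(C_obs, y)

# With error-free data the BLP reproduces the observations exactly
s_at_obs = exp_cov(np.abs(p_obs[:, None] - p_obs[None, :])) @ np.linalg.solve(C_obs, y)
```

Densifying `p_obs` corresponds to Case 2 on the slide: the predicted signal then tracks the true signal ever more closely between the observation points.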
Best linear prediction (BLP)
The BLP is applicable to any linear model, and thus to the model underlying the Kalman filter (Kalman filter structure in batch form).
* That the system noises are uncorrelated in time makes the recursion possible in the time update.
* That the measurement noises are uncorrelated in time makes the recursion possible in the measurement update.
Together, these two properties make the recursion feasible.
Best linear prediction (BLP)
Limitations: the BLP-based Kalman filter requires information on the mean and the variance matrix of the initial state vector.
In most applications, this information is not available (UNKNOWN!).
Best linear prediction (BLP)
An ad hoc solution (the diffuse filter): given a user-defined value of the initial state, we set the elements of its variance matrix to sufficiently large values.
Two unresolved problems:
1) To what extent should the initial uncertainty take large values in order to fulfil this in practice?
2) Even if the first problem (the prediction part) were solved, one still needs to determine the unknown mean (the estimation part).
Best linear unbiased prediction (BLUP)
We further restrict ourselves to the class of unbiased linear statistics; the price to pay for this simplification is again a (possibly) larger mean squared error.
Underlying model: an observable random vector and an unobservable random vector, with unknown mean.
Best linear unbiased prediction (BLUP)
Class of linear unbiased statistics.
Prediction error; estimation error (the transformation linking the two is a square invertible matrix).
Solution: the BLUP, in which the unknown mean is replaced by its BLUE.
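As a hedged reconstruction in standard notation (symbols assumed, not from the slide): for the linear model $\mathsf{E}(y) = A x$ with unknown $x$ and variance matrix $Q_{yy}$, the BLUE of $x$ and the BLUP of a zero-mean random vector $s$ correlated with $y$ (cross-covariance $Q_{sy}$) read

```latex
\hat{x}
  = \big(A^{\mathsf T} Q_{yy}^{-1} A\big)^{-1} A^{\mathsf T} Q_{yy}^{-1}\, y,
\qquad
\hat{s}
  = Q_{sy} Q_{yy}^{-1}\big(y - A\hat{x}\big).
```

That is, the BLUP replaces the unknown mean in the BLP by its BLUE; it is therefore independent of any prior mean or uncertainty of $x$.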
BLUE-BLUP recursion
Innovation process: an invertible transformation turns the misclosures into group-wise uncorrelated misclosures (a block-diagonal decomposition of their variance matrix).
BLUE-BLUP recursion
Statistics of the uncorrelated misclosures: the innovations are the prediction errors of the misclosures.
In the case of partitioned linear models, through a proper basis matrix, they become the predicted residuals.
BLUE-BLUP recursion
Kalman dynamic model and the simplifying assumptions.
Dynamic model: the time link between the to-be-predicted variables, given by the transition matrix and the system noise.
Simplifying assumptions:
- zero-mean system noise
- the initial variable is uncorrelated with the system noises
- the system noises are uncorrelated with the measurement noises
- the system noises are uncorrelated in time
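In standard Kalman-filter notation (symbols assumed), the dynamic model and its assumptions can be written as

```latex
x_k = \Phi_{k,k-1}\, x_{k-1} + d_k,
\qquad
\mathsf{E}(d_k) = 0,
\quad
\mathsf{E}\big(d_k d_l^{\mathsf T}\big) = Q_{d_k}\,\delta_{kl},
```

with $\Phi_{k,k-1}$ the transition matrix, $d_k$ the system noise, and $\delta_{kl}$ expressing that the system noises are uncorrelated in time.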
BLUE-BLUP recursion
Measurement model and the simplifying assumptions.
Measurement model: the link between the observables and the to-be-predicted variables (the state vector), with measurement noise.
Simplifying assumptions:
- zero-mean measurement noise
- the initial variable is uncorrelated with the observations
- the observations are uncorrelated in time
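In the same assumed notation, the measurement model reads

```latex
y_k = A_k x_k + e_k,
\qquad
\mathsf{E}(e_k) = 0,
\quad
\mathsf{E}\big(e_k e_l^{\mathsf T}\big) = R_k\,\delta_{kl},
```

with $A_k$ the design matrix and $e_k$ the measurement noise, uncorrelated in time.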
BLUE-BLUP recursion
The BLUP is applicable to any linear model, and thus to the model underlying the Kalman filter (partitioned form).
BLUE-BLUP recursion
Initialization: a linear unbiased statistic is proposed, built on the misclosure vector of the initial model (through a square invertible matrix).
At this stage, prediction and estimation coincide.
BLUE-BLUP recursion
Initialization: Prediction = Estimation.
* The solution is independent of the initial uncertainty.
Although the solutions are identical, their quality is judged differently: the error variance matrices of prediction and of estimation differ.
BLUE-BLUP recursion
Time update.
BLUE-BLUP recursion
Time update.
* The BLUP (BLUE) of a linear function is the linear function of the BLUP (BLUE).
Prediction and estimation, with their error variance matrices (the cross terms being zero).
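The time-update equations lost from this slide are the standard Kalman ones (notation assumed, with $P$ the error variance matrix):

```latex
\hat{x}_{k|k-1} = \Phi_{k,k-1}\, \hat{x}_{k-1|k-1},
\qquad
P_{k|k-1} = \Phi_{k,k-1}\, P_{k-1|k-1}\, \Phi_{k,k-1}^{\mathsf T} + Q_{d_k},
```

an instance of the stated property that the BLUP (BLUE) of a linear function is the linear function of the BLUP (BLUE).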
BLUE-BLUP recursion
Measurement update.
BLUE-BLUP recursion
Measurement update: the predicted residuals.
* The BLUP of a linear function is the linear function of the BLUP.
BLUE-BLUP recursion
Measurement update: in the case of prediction we propose one linear unbiased statistic, and in the case of estimation another.
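In the assumed standard notation, the measurement update driven by the predicted residual (innovation) $v_k$ takes the familiar gain form

```latex
v_k = y_k - A_k \hat{x}_{k|k-1},
\qquad
K_k = P_{k|k-1} A_k^{\mathsf T}\big(A_k P_{k|k-1} A_k^{\mathsf T} + R_k\big)^{-1},
```

```latex
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k v_k,
\qquad
P_{k|k} = \big(I - K_k A_k\big)\, P_{k|k-1}.
```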
BLUE-BLUP recursion
Recursion of the error variance matrices: initialization, time update, and measurement update, with the prediction gain matrix and the estimation gain matrix.
BLUE-BLUP recursion
Recursive algorithm: collecting the estimator and the predictor into a single state vector.
Initialization.
BLUE-BLUP recursion
Recursive algorithm: collecting the estimator and the predictor into a single state vector.
Time update.
BLUE-BLUP recursion
Recursive algorithm: collecting the estimator and the predictor into a single state vector.
Measurement update.
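The three steps above can be sketched in code. This is a minimal illustration of the idea, not the author's algorithm: the function name, the time-invariant matrices, and the shapes are all assumptions. The key feature mirrored from the talk is the initialization, where the state is estimated from the first epoch of data alone (prediction = estimation), so no prior mean or variance of the initial state is needed.

```python
import numpy as np

def blue_blup_filter(ys, Phi, A, Qd, R):
    """Kalman-type recursion initialized from the first epoch alone
    (prediction = estimation), so that no prior mean or variance of
    the state is required.

    ys  : list of measurement vectors y_k, k = 0..K
    Phi : transition matrix, A : design matrix (time-invariant here)
    Qd  : system-noise variance, R : measurement-noise variance
    """
    # Initialization: weighted least-squares (BLUE) from the first epoch
    N = A.T @ np.linalg.solve(R, A)            # normal matrix A^T R^-1 A
    x = np.linalg.solve(N, A.T @ np.linalg.solve(R, ys[0]))
    P = np.linalg.inv(N)                       # its error variance matrix

    out = [(x, P)]
    for y in ys[1:]:
        # Time update
        x = Phi @ x
        P = Phi @ P @ Phi.T + Qd
        # Measurement update via the predicted residual (innovation)
        v = y - A @ x
        S = A @ P @ A.T + R                    # innovation variance
        K = np.linalg.solve(S.T, A @ P.T).T    # gain K = P A^T S^-1
        x = x + K @ v
        P = (np.eye(len(x)) - K @ A) @ P
        out.append((x, P))
    return out
```

For a one-dimensional random-constant model (Phi = A = 1, Qd = 0, R = 1) this recursion simply averages the measurements, with the error variance shrinking as 1/k, which is a quick way to sanity-check the implementation.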
BLUE-BLUP recursion
Block diagram of the algorithm (BLUE and BLUP branches).
Summary and concluding remarks
Prediction: when observations are used to guess a random vector.
Estimation: when observations are used to guess an unknown nonrandom vector.
Best predictors exist within different classes of statistics.
The BLP-based Kalman filter requires the initial uncertainty of the state vector, whereas the BLUP-based one is independent of it.
Summary and concluding remarks
The BLUE recursion cannot stand on its own, since it requires the predicted residuals and therefore the predicted state vector.
The estimation gain matrix is related to the prediction gain matrix; the two become identical (BLUE = BLUP) when the system noise is absent.