
Main purpose of this chapter

- Derive the single-stage predictor and the general state predictor.
- Introduce the innovations process and check its properties.

Introduction

Three types of state estimation $\hat{x}(k|N)$:
1. prediction ($N < k$)
2. filtering ($N = k$)
3. smoothing ($N > k$)

where $N$ is the total number of available measurements and $k$ is the current time point. In this chapter we develop algorithms for mean-squared predicted estimates $\hat{x}_{MS}(k|j)$ of the state $x(k)$; note that in prediction, $k > j$.

Single-stage predictor

Starting from the fundamental theorem of estimation theory,
$$\hat{x}(k|k-1) = E\{x(k) \mid Z(k-1)\},$$
where $Z(k-1) = [z(1), z(2), \ldots, z(k-1)]^T$.

Single-stage predicted estimate: applying the linear operation $E\{\cdot \mid Z(k-1)\}$ to the state equation
$$x(k) = \Phi(k,k-1)x(k-1) + \Gamma(k,k-1)w(k-1) + \Psi(k,k-1)u(k-1),$$
and noting that the process-noise term drops out because $E\{w(k-1) \mid Z(k-1)\} = E\{w(k-1)\} = 0$, we obtain
$$\hat{x}(k|k-1) = \Phi(k,k-1)\hat{x}(k-1|k-1) + \Psi(k,k-1)u(k-1).$$
Here $\hat{x}(k-1|k-1)$ is the filtered estimate (covered in Chapter 17); filtered and predicted state estimates are therefore very tightly coupled.

Single-stage predictor (continued)

The error covariance of the single-stage predicted estimate is
$$P(k|k-1) = E\{[\tilde{x}(k|k-1) - m_{\tilde{x}}(k|k-1)][\tilde{x}(k|k-1) - m_{\tilde{x}}(k|k-1)]^T\},$$
where $\tilde{x}(k|k-1) = x(k) - \hat{x}(k|k-1)$. Since $m_{\tilde{x}}(k|k-1) = 0$ (Property 1 in Ch. 13), and since
$$\tilde{x}(k|k-1) = \Phi(k,k-1)\tilde{x}(k-1|k-1) + \Gamma(k,k-1)w(k-1),$$
we have
$$P(k|k-1) = E\{\tilde{x}(k|k-1)\tilde{x}^T(k|k-1)\} = \Phi(k,k-1)P(k-1|k-1)\Phi^T(k,k-1) + \Gamma(k,k-1)Q(k-1)\Gamma^T(k,k-1).$$

Initializing the single-stage predictor:
$$\hat{x}(0|0) = E\{x(0) \mid \text{no measurements}\} = m_x(0), \qquad P(0|0) = E\{\tilde{x}(0|0)\tilde{x}^T(0|0)\}.$$
At the starting time, $\hat{x}(0|0)$ and $P(0|0)$ are required.
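As a minimal numerical sketch of these two equations (the system matrices below are illustrative placeholders, not values from the text):

```python
import numpy as np

def predict_one_stage(x_filt, P_filt, Phi, Gamma, Psi, Q, u):
    """Single-stage predictor: propagate the filtered estimate and its
    error covariance one step ahead (no measurement update)."""
    x_pred = Phi @ x_filt + Psi @ u                      # xhat(k|k-1)
    P_pred = Phi @ P_filt @ Phi.T + Gamma @ Q @ Gamma.T  # P(k|k-1)
    return x_pred, P_pred

# Hypothetical 2-state system, used only to exercise the equations.
Phi   = np.array([[1.0, 0.1], [0.0, 1.0]])
Gamma = np.array([[0.005], [0.1]])
Psi   = np.array([[0.0], [0.1]])
Q     = np.array([[0.5]])

x00 = np.zeros(2)   # xhat(0|0) = m_x(0)
P00 = np.eye(2)     # P(0|0)
x10, P10 = predict_one_stage(x00, P00, Phi, Gamma, Psi, Q, u=np.array([1.0]))
```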

General state predictor

Objective: determine $\hat{x}(k|j)$, $k > j$, i.e., the predicted value of $x(k)$ at the current time $j$, based on the filtered state estimate $\hat{x}(j|j)$ and its error-covariance matrix $P(j|j) = E\{\tilde{x}(j|j)\tilde{x}^T(j|j)\}$.

Theorem 1. If $u(k)$ is deterministic, then:
a. $$\hat{x}(k|j) = \Phi(k,j)\hat{x}(j|j) + \sum_{i=j+1}^{k} \Phi(k,i)\Psi(i,i-1)u(i-1), \quad k > j.$$
b. The vector random sequence $\{\tilde{x}(k|j);\, k = j+1, j+2, \ldots\}$ is
1. zero mean,
2. Gaussian,
3. first-order Markov, and
4. governed by the recursion (1) below, with error covariance
$$P(k|j) = \Phi(k,k-1)P(k-1|j)\Phi^T(k,k-1) + \Gamma(k,k-1)Q(k-1)\Gamma^T(k,k-1).$$

General state predictor, proof of a: the solution to the state equation is
$$x(k) = \Phi(k,j)x(j) + \sum_{i=j+1}^{k} \Phi(k,i)[\Gamma(i,i-1)w(i-1) + \Psi(i,i-1)u(i-1)].$$
Applying the linear operation $E\{\cdot \mid Z(j)\}$, we obtain
$$\hat{x}(k|j) = \Phi(k,j)\hat{x}(j|j) + \sum_{i=j+1}^{k} \Phi(k,i)[\Gamma(i,i-1)E\{w(i-1) \mid Z(j)\} + \Psi(i,i-1)E\{u(i-1) \mid Z(j)\}].$$
Since $Z(j)$ depends only on $x(0)$ and $w(0), w(1), \ldots, w(j-1)$, we have, for $i > j$,
$$E\{w(i-1) \mid Z(j)\} = E\{w(i-1) \mid w(0), w(1), \ldots, w(j-1), x(0)\} = E\{w(i-1)\} = 0,$$
and, because $u$ is deterministic, $E\{u(i-1) \mid Z(j)\} = E\{u(i-1)\} = u(i-1)$. Hence
$$\hat{x}(k|j) = \Phi(k,j)\hat{x}(j|j) + \sum_{i=j+1}^{k} \Phi(k,i)\Psi(i,i-1)u(i-1).$$
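In recursive form, each step of the general predictor is just a single-stage prediction. A minimal sketch, assuming a time-invariant $\Phi$ and $\Psi$ (both hypothetical here):

```python
import numpy as np

def predict_k_given_j(x_jj, Phi, Psi, u_seq):
    """General state predictor xhat(k|j): starting from the filtered
    estimate xhat(j|j), apply one single-stage prediction per step,
    xhat(i|j) = Phi xhat(i-1|j) + Psi u(i-1), for i = j+1, ..., k.
    u_seq holds the deterministic inputs u(j), u(j+1), ..., u(k-1)."""
    x = np.array(x_jj, dtype=float)
    for u in u_seq:
        x = Phi @ x + Psi @ u
    return x
```

For constant $\Phi$ this telescopes to the closed form of Theorem 1a, $\Phi^{k-j}\hat{x}(j|j) + \sum_{i=j+1}^{k} \Phi^{k-i}\Psi u(i-1)$.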

Subtracting the two expressions for $x(k)$ and $\hat{x}(k|j)$ above gives
$$\tilde{x}(k|j) = x(k) - \hat{x}(k|j) = \Phi(k,j)\tilde{x}(j|j) + \sum_{i=j+1}^{k} \Phi(k,i)\Gamma(i,i-1)w(i-1),$$
which, by peeling off the $i = k$ term and using $\Phi(k,j) = \Phi(k,k-1)\Phi(k-1,j)$, can be written recursively as
$$\tilde{x}(k|j) = \Phi(k,k-1)\tilde{x}(k-1|j) + \Gamma(k,k-1)w(k-1). \qquad (1)$$

General state predictor, proof of b: check the properties of $\{\tilde{x}(k|j);\, k = j+1, j+2, \ldots\}$:
1. zero mean: follows from the unbiasedness of mean-squared estimates (Property 1 in Ch. 13).
2. Gaussian: follows from the Gaussianity of mean-squared estimates (Property 4 in Ch. 13).
3. first-order Markov: by (1), $\tilde{x}(k|j)$ depends on the past only through $\tilde{x}(k-1|j)$.

General state predictor (continued)
4. Error covariance of the predicted states: using (1),
$$P(k|j) = E\{\tilde{x}(k|j)\tilde{x}^T(k|j)\} = \Phi(k,k-1)P(k-1|j)\Phi^T(k,k-1) + \Gamma(k,k-1)Q(k-1)\Gamma^T(k,k-1).$$
Note: setting $j = k-1$ recovers $\hat{x}(k|k-1)$, the single-stage predictor.

Example 16.1. Consider the first-order system $x(k+1) = \frac{1}{\sqrt{2}}\,x(k) + w(k)$ with noise covariance $q = 25$. The recursive equations for the predicted state and its error covariance are
$$\hat{x}(k|0) = \frac{1}{\sqrt{2}}\,\hat{x}(k-1|0), \qquad P(k|0) = \frac{1}{2}P(k-1|0) + 25, \quad k = 1, 2, \ldots$$
Simulations for $P(0|0) = 6$ and $P(0|0) = 100$ are shown on p. 232 of the textbook: for large $k$ the predictor becomes insensitive to $P(0|0)$, since both covariance sequences converge to the same steady-state value $25/(1 - \frac{1}{2}) = 50$.
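A short sketch that iterates the covariance recursion of Example 16.1 from both initial values mentioned in the text:

```python
import numpy as np

def cov_recursion(P0, n_steps=20):
    """Iterate P(k|0) = 0.5 * P(k-1|0) + 25 (Example 16.1)."""
    P = [float(P0)]
    for _ in range(n_steps):
        P.append(0.5 * P[-1] + 25.0)  # Phi^2 P + q, Phi = 1/sqrt(2), q = 25
    return np.array(P)

P_a = cov_recursion(6.0)
P_b = cov_recursion(100.0)
print(P_a[-1], P_b[-1])  # both ~50: insensitive to P(0|0) for large k
```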

The innovations process

Innovations process: $\tilde{z}(k+1|k) \triangleq z(k+1) - \hat{z}(k+1|k)$, where
$$\hat{z}(k+1|k) = E\{z(k+1) \mid Z(k)\} = E\{H(k+1)x(k+1) + v(k+1) \mid Z(k)\} = H(k+1)\hat{x}(k+1|k).$$

Theorem 2 (representation and properties of the innovations process).
a. $$\tilde{z}(k+1|k) = z(k+1) - H(k+1)\hat{x}(k+1|k) = H(k+1)\tilde{x}(k+1|k) + v(k+1). \qquad (2)$$
b. $\tilde{z}(k+1|k)$ is a zero-mean Gaussian white noise sequence, with
$$P_{\tilde{z}\tilde{z}}(k+1|k) = H(k+1)P(k+1|k)H^T(k+1) + R(k+1).$$

The innovations process, proof: part a is easily proved, so we prove only b.
1. Zero mean: $E\{\tilde{x}(k+1|k)\} = E\{v(k+1)\} = 0$, so $E\{\tilde{z}(k+1|k)\} = 0$ by (2).
2. Gaussian: $z(k+1)$ and $\hat{x}(k+1|k)$ are Gaussian, so $\tilde{z}(k+1|k)$ is Gaussian by (2).
3. White noise: we show that $E\{\tilde{z}(i+1|i)\tilde{z}^T(j+1|j)\} = P_{\tilde{z}\tilde{z}}(i+1|i)\,\delta_{ij}$. For the case $i > j$ (arguments abbreviated),
$$E\{\tilde{z}(i+1|i)\tilde{z}^T(j+1|j)\} = E\{[H\tilde{x} + v][H\tilde{x} + v]^T\} = E\{H\tilde{x}[H\tilde{x} + v]^T\} + E\{v[H\tilde{x} + v]^T\}.$$

Using $E\{v(i+1)v^T(j+1)\} = 0$ for $i \neq j$ and
$$E\{v(i+1)\tilde{x}^T(j+1|j)\} = E\{v(i+1)\}\,E\{\tilde{x}^T(j+1|j)\} = 0,$$
we continue as follows:
$$E\{\tilde{z}(i+1|i)\tilde{z}^T(j+1|j)\} = H(i+1)\,E\{\tilde{x}(i+1|i)[z(j+1) - H(j+1)\hat{x}(j+1|j)]^T\} \quad \text{by (2)}$$
$$= 0 \qquad (\because \text{the orthogonality principle, i.e., } E\{[\theta - \hat{\theta}_{MS}(k)]f^T(Z(k))\} = 0).$$
For the case $i = j$,
$$P_{\tilde{z}\tilde{z}}(i+1|i) = E\{[H\tilde{x} + v][H\tilde{x} + v]^T\} = H(i+1)P(i+1|i)H^T(i+1) + R(i+1)$$
$$(\because E\{v(i+1)\tilde{x}^T(i+1|i)\} = 0).$$
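These properties can be checked empirically. A sketch on a hypothetical scalar system (all parameters invented for the demo): run a scalar Kalman predictor, collect the innovations, and verify zero mean, variance $\approx HPH^T + R$, and near-zero lag-1 correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar system: x(k+1) = phi x(k) + w(k), z(k) = h x(k) + v(k).
phi, h, q, r, n = 0.9, 1.0, 0.5, 1.0, 5000

# Simulate the system.
x = np.zeros(n); z = np.zeros(n)
for k in range(1, n):
    x[k] = phi * x[k-1] + rng.normal(scale=np.sqrt(q))
    z[k] = h * x[k] + rng.normal(scale=np.sqrt(r))

# Scalar Kalman predictor/filter; collect innovations ztilde(k|k-1).
xhat, P = 0.0, 1.0
innov, S = [], []
for k in range(1, n):
    x_pred = phi * xhat                 # xhat(k|k-1)
    P_pred = phi * P * phi + q          # P(k|k-1)
    s = h * P_pred * h + r              # P_zz(k|k-1) = H P H^T + R
    e = z[k] - h * x_pred               # innovation
    innov.append(e); S.append(s)
    K = P_pred * h / s                  # measurement update
    xhat = x_pred + K * e
    P = (1 - K * h) * P_pred

e = np.array(innov)
print(np.mean(e))                       # ~0 (zero mean)
print(np.var(e), np.mean(S))            # sample variance ~ H P H^T + R
print(np.corrcoef(e[:-1], e[1:])[0, 1]) # ~0 (whiteness at lag 1)
```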

Supplementary material

For a stationary discrete-time random sequence:
1. Forward linear prediction: $\hat{y}(k) = \sum_{i=1}^{M} a_{M,i}\, y(k-i)$, with the coefficients chosen such that $E\{(y(k) - \hat{y}(k))^2\}$ is minimized (see the sketch below).
2. Backward linear prediction (smoothing or interpolation): $\hat{y}(k-M) = \sum_{i=1}^{M} c_{M,i}\, y(k-i+1)$, with the coefficients chosen such that $E\{(y(k-M) - \hat{y}(k-M))^2\}$ is minimized.

Lattice architecture:
1. not mean-squared state estimation;
2. easy extension of the number of lattice stages: adding stages has no effect on the existing reflection coefficients.
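As an illustrative sketch of forward linear prediction (not from the text), the coefficients $a_{M,i}$ can be estimated by solving the normal (Yule-Walker) equations built from sample autocorrelations:

```python
import numpy as np

def forward_lp_coeffs(y, M):
    """Forward linear prediction: solve the normal (Yule-Walker) equations
    R a = r for the coefficients a_{M,i} minimizing E{(y(k) - yhat(k))^2},
    using biased sample autocorrelations of y."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    r = np.array([y[:n-m] @ y[m:] for m in range(M + 1)]) / n   # r(0..M)
    R = np.array([[r[abs(i - j)] for j in range(M)]             # Toeplitz
                  for i in range(M)])
    return np.linalg.solve(R, r[1:M+1])                          # a_{M,1..M}

# Demo on an AR(2) sequence (parameters made up for illustration).
rng = np.random.default_rng(1)
y = np.zeros(10000)
for k in range(2, len(y)):
    y[k] = 1.2 * y[k-1] - 0.5 * y[k-2] + rng.normal()
print(forward_lp_coeffs(y, 2))   # approximately [1.2, -0.5]
```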