
Time Series Analysis
hm@imm.dtu.dk
Informatics and Mathematical Modelling
Technical University of Denmark
DK-2800 Kgs. Lyngby

Outline of the lecture

State space models, 1st part:
- Model: Sec. 10.1
- The Kalman filter: Sec. 10.3
- Cursory material: Sec. 10.3.2 (Empirical-Bayesian description)

State space models

- System model: a full description of the dynamical system (i.e. including the parameters)
- Observations: noisy measurements on some parts (states) of the system
- Goal: reconstruct and predict the state of the system

[Block diagram: Input u_t -> System (state X_t) -> Output Y_t]

State space models; examples

- Estimate the temperature inside a solid block of material when we measure the temperature on the surface (with noise)
- Noisy measurements of the position of a ship: give a better estimate of the current position
- A model of a car engine. Input: fuel. State: fuel and temperature in various parts. Observations: sensor output
- PK/PD modelling. State: amount of drug in blood, liver, muscles, ... Observations: amount in blood (with noise). Input: drug

Determining the model structure

- The system model is often based on physical considerations; this often leads to dynamical models consisting of differential equations
- An m'th order differential equation can be formulated as m 1st order differential equations
- Sampling such a system leads to a linear state space model, and there exists a way of getting from the coefficients in continuous time to the coefficients in discrete time

The linear stochastic state space model

System equation:      X_t = A X_{t-1} + B u_{t-1} + e_{1,t}
Observation equation: Y_t = C X_t + e_{2,t}

- X: state vector; Y: observation vector; u: input vector
- e_1: system noise; e_2: observation noise
- dim(X_t) = m is called the order of the system
- {e_{1,t}} and {e_{2,t}} are mutually independent white noise processes with V[e_{1,t}] = Σ_1 and V[e_{2,t}] = Σ_2
- A, B, C, Σ_1, and Σ_2 are known matrices
- The state vector contains all information available for future evaluation of the system; the state vector is a Markov process
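The model can also be simulated directly. The sketch below uses illustrative matrices and noise levels (placeholders, not taken from the lecture) and simply iterates the system and observation equations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative matrices (placeholders, not from the lecture)
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # system matrix
B = np.array([[0.5], [1.0]])             # input matrix
C = np.array([[1.0, 0.0]])               # observation matrix
Sigma1 = 0.1 * np.eye(2)                 # system noise covariance, V[e_1]
Sigma2 = np.array([[1.0]])               # observation noise covariance, V[e_2]

def simulate(x0, u, n):
    """Iterate X_t = A X_{t-1} + B u_{t-1} + e_{1,t}, Y_t = C X_t + e_{2,t}."""
    x = np.asarray(x0, dtype=float)
    xs, ys = [], []
    for _ in range(n):
        e1 = rng.multivariate_normal(np.zeros(2), Sigma1)
        x = A @ x + (B * u).ravel() + e1          # system equation
        e2 = rng.multivariate_normal(np.zeros(1), Sigma2)
        ys.append(C @ x + e2)                     # observation equation
        xs.append(x.copy())
    return np.array(xs), np.array(ys)

states, obs = simulate([0.0, 0.0], u=0.0, n=50)
```

Only `obs` would be available in practice; the point of the filter below is to recover `states` from it.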

Example: a falling body

- Height above ground: z(t)
- Initial conditions: position z(t_0) and velocity z'(t_0)
- Physical considerations: d²z/dt² = -g
- States: position x_1(t) = z(t) and velocity x_2(t) = z'(t)
- Only the position is measured: y(t) = x_1(t)

Continuous-time description, with x(t) = [x_1(t)  x_2(t)]^T:

x'(t) = [ 0 1 ; 0 0 ] x(t) + [ 0 ; -1 ] g
y(t)  = [ 1 0 ] x(t)

Example: a falling body (cont'd)

Solving the equations:

x_2(t) = -g (t - t_0) + x_2(t_0)
x_1(t) = -(g/2) (t - t_0)² + (t - t_0) x_2(t_0) + x_1(t_0)

Sampling, with t = sT, t_0 = (s-1)T, and T = 1:

x_s = [ 1 1 ; 0 1 ] x_{s-1} + [ -1/2 ; -1 ] g
y_s = [ 1 0 ] x_s

Adding disturbances and measurement noise:

x_s = [ 1 1 ; 0 1 ] x_{s-1} + [ -1/2 ; -1 ] g + e_{1,s}
y_s = [ 1 0 ] x_s + e_{2,s}
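The sampled recursion can be checked numerically against the closed-form continuous-time solution at the sampling instants. A small sketch, assuming a value for g (the slide does not state one) and noise-free dynamics:

```python
import numpy as np

g = 9.82                                  # assumed value; the slide does not state it
A = np.array([[1.0, 1.0], [0.0, 1.0]])    # sampled system matrix (T = 1)
Bg = np.array([-0.5 * g, -1.0 * g])       # sampled input term [-1/2, -1]^T g

x = np.array([10000.0, 0.0])              # x_1(0) = 10000 m, x_2(0) = 0 m/s
for s in range(1, 6):
    x = A @ x + Bg                        # discrete recursion x_s = A x_{s-1} + B g
    t = float(s)
    exact = np.array([10000.0 - 0.5 * g * t**2,   # x_1(t) = -(g/2) t^2 + x_1(0)
                      -g * t])                    # x_2(t) = -g t + x_2(0)
    assert np.allclose(x, exact)          # sampled model matches the exact solution
```

The assertion holds at every step because the double integrator is sampled exactly, not approximately.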

Example: a falling body (cont'd)

Given measurements of the position at time points 1, 2, ..., s we could:

- Predict the future position and velocity: X̂_{s+k|s} (k > 0)
- Reconstruct the current position and velocity from the noisy measurements: X̂_{s|s}
- Interpolate to find the best estimate of the position and velocity at a previous time point: X̂_{s+k|s} (k < 0) (estimate the path in the state space; vary k so that s + k varies from 1 to s)

We will focus on reconstruction and prediction.

Requirement

In order to predict, reconstruct or interpolate the m-dimensional state of the system

X_t = A X_{t-1} + B u_{t-1} + e_{1,t}
Y_t = C X_t + e_{2,t}

the system must be observable, i.e.

rank [ C^T  (CA)^T  ...  (CA^{m-1})^T ] = m.

For the falling body (S-PLUS):

> qr( cbind(t(c), t(c %*% A)) )$rank
[1] 2

where A and c are taken from the discrete-time description of the system.
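The same rank computation can be done with NumPy instead of S-PLUS; a sketch for the discrete-time falling-body matrices:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])    # discrete-time system matrix
C = np.array([[1.0, 0.0]])                # observation matrix
m = A.shape[0]                            # order of the system

# Stack C, CA, ..., CA^{m-1} and check that the rank equals m
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(m)])
observable = np.linalg.matrix_rank(O) == m
print(observable)  # True: the falling-body system is observable
```

Intuitively, position measurements alone suffice because two consecutive positions determine the velocity.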

The Kalman filter

Initialization:

X̂_{1|0} = E[X_1] = μ_0,  Σ^xx_{1|0} = V[X_1] = V_0,
and thereby Σ^yy_{1|0} = C Σ^xx_{1|0} C^T + Σ_2

For t = 1, 2, 3, ...

Reconstruction:

K_t = Σ^xx_{t|t-1} C^T (Σ^yy_{t|t-1})^{-1}
X̂_{t|t} = X̂_{t|t-1} + K_t (Y_t - C X̂_{t|t-1})
Σ^xx_{t|t} = Σ^xx_{t|t-1} - K_t Σ^yy_{t|t-1} K_t^T

Prediction:

X̂_{t+1|t} = A X̂_{t|t} + B u_t
Σ^xx_{t+1|t} = A Σ^xx_{t|t} A^T + Σ_1
Σ^yy_{t+1|t} = C Σ^xx_{t+1|t} C^T + Σ_2
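The recursion translates almost line by line into code. A minimal sketch (function and variable names are my own, not from the lecture), returning the reconstructions X̂_{t|t} and one-step predictions X̂_{t+1|t}:

```python
import numpy as np

def kalman_filter(ys, us, A, B, C, Sigma1, Sigma2, mu0, V0):
    """Run the reconstruction/prediction recursion for observations ys
    and inputs us (both sequences of 1-D arrays)."""
    x_pred = np.asarray(mu0, dtype=float)       # X_{1|0} = E[X_1]
    P_pred = np.asarray(V0, dtype=float)        # Sigma^xx_{1|0} = V[X_1]
    recon, pred = [], []
    for y, u in zip(ys, us):
        # Reconstruction
        Syy = C @ P_pred @ C.T + Sigma2         # Sigma^yy_{t|t-1}
        K = P_pred @ C.T @ np.linalg.inv(Syy)   # Kalman gain K_t
        x_rec = x_pred + K @ (y - C @ x_pred)   # X_{t|t}
        P_rec = P_pred - K @ Syy @ K.T          # Sigma^xx_{t|t}
        # Prediction
        x_pred = A @ x_rec + B @ u              # X_{t+1|t}
        P_pred = A @ P_rec @ A.T + Sigma1       # Sigma^xx_{t+1|t}
        recon.append(x_rec)
        pred.append(x_pred)
    return np.array(recon), np.array(pred)

# One sweep for the falling body (g assumed to be 9.82):
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[-0.5], [-1.0]])
C = np.array([[1.0, 0.0]])
S1 = np.array([[2.0, 0.8], [0.8, 1.0]])
S2 = np.array([[10000.0]])
recon, pred = kalman_filter([np.array([10171.0])], [np.array([9.82])],
                            A, B, C, S1, S2, [10000.0, 0.0], np.zeros((2, 2)))
```

With V_0 = 0 the first gain is zero, so the first reconstruction simply returns μ_0.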

Multi-step predictions

- Not part of the Kalman filter as stated above
- Can be calculated recursively for a given t, starting with k = 1, for which X̂_{t+k|t} and Σ^xx_{t+k|t} are calculated as part of the Kalman filter:

X̂_{t+k+1|t} = A X̂_{t+k|t} + B u_{t+k}
Σ^xx_{t+k+1|t} = A Σ^xx_{t+k|t} A^T + Σ_1

- The future input must be decided
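The multi-step recursion only needs the two prediction equations and a user-chosen sequence of future inputs; a sketch (names again my own):

```python
import numpy as np

def predict_k_steps(x, P, future_us, A, B, Sigma1):
    """Iterate the prediction equations from a filtered starting point
    (x, P) = (X_{t+1|t}, Sigma^xx_{t+1|t}); future_us must be decided."""
    out = []
    for u in future_us:
        x = A @ x + B @ u                 # X_{t+k+1|t}
        P = A @ P @ A.T + Sigma1          # Sigma^xx_{t+k+1|t}
        out.append((x.copy(), P.copy()))
    return out

# Two steps ahead for the falling body (g assumed to be 9.82), starting from
# X_{1|0} = [10000, 0] with Sigma^xx_{1|0} = 0:
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[-0.5], [-1.0]])
Sigma1 = np.array([[2.0, 0.8], [0.8, 1.0]])
steps = predict_k_steps(np.array([10000.0, 0.0]), np.zeros((2, 2)),
                        [np.array([9.82])] * 2, A, B, Sigma1)
```

Note how the covariance grows by Σ_1 at every step: without new observations the uncertainty only accumulates.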

Naming and history

The filter is named after Rudolf E. Kalman, though Thorvald Nicolai Thiele and Peter Swerling actually developed similar algorithms earlier. It was during a visit by Kalman to the NASA Ames Research Center that he saw the applicability of his ideas to the problem of trajectory estimation for the Apollo program, leading to its incorporation in the Apollo navigation computer.

From http://en.wikipedia.org/wiki/kalman_filter

The foundation of the Kalman filter

- Theorem 2.6 (Linear projection): the theorem is concerned with the random vectors X and Y, for which the means, variances and covariances are used
- The state is called X_t and the observation is called Y_t, and we could write down the theorem for these
- We have additional information: the past observations 𝒴_{t-1} = (Y_1^T, ..., Y_{t-1}^T)^T
- We include this information by considering the random vectors X_t | 𝒴_{t-1} and Y_t | 𝒴_{t-1} instead:

E[(X_t | 𝒴_{t-1}) | (Y_t | 𝒴_{t-1})] = E[X_t | Y_t, 𝒴_{t-1}]
  = E[X_t | 𝒴_{t-1}] + C[X_t, Y_t | 𝒴_{t-1}] V^{-1}[Y_t | 𝒴_{t-1}] (Y_t - E[Y_t | 𝒴_{t-1}])

V[(X_t | 𝒴_{t-1}) | (Y_t | 𝒴_{t-1})] = V[X_t | Y_t, 𝒴_{t-1}]
  = V[X_t | 𝒴_{t-1}] - C[X_t, Y_t | 𝒴_{t-1}] V^{-1}[Y_t | 𝒴_{t-1}] C^T[X_t, Y_t | 𝒴_{t-1}]

The foundation of the Kalman filter (cont'd)

E[X_t | Y_t, 𝒴_{t-1}] = E[X_t | 𝒴_{t-1}] + C[X_t, Y_t | 𝒴_{t-1}] V^{-1}[Y_t | 𝒴_{t-1}] (Y_t - E[Y_t | 𝒴_{t-1}])
V[X_t | Y_t, 𝒴_{t-1}] = V[X_t | 𝒴_{t-1}] - C[X_t, Y_t | 𝒴_{t-1}] V^{-1}[Y_t | 𝒴_{t-1}] C^T[X_t, Y_t | 𝒴_{t-1}]

In the notation of the state space model:

X̂_{t|t} = X̂_{t|t-1} + Σ^xy_{t|t-1} (Σ^yy_{t|t-1})^{-1} (Y_t - Ŷ_{t|t-1})
Σ^xx_{t|t} = Σ^xx_{t|t-1} - Σ^xy_{t|t-1} (Σ^yy_{t|t-1})^{-1} (Σ^xy_{t|t-1})^T
K_t = Σ^xy_{t|t-1} (Σ^yy_{t|t-1})^{-1}

K_t is called the Kalman gain, because it determines how much the 1-step prediction error influences the update of the state estimate.

The foundation of the Kalman filter (cont'd)

The 1-step predictions are obtained directly from the state space model:

X̂_{t+1|t} = A X̂_{t|t} + B u_t
Ŷ_{t+1|t} = C X̂_{t+1|t}

which results in the prediction errors:

X̃_{t+1|t} = X_{t+1} - X̂_{t+1|t} = A X̃_{t|t} + e_{1,t+1}
Ỹ_{t+1|t} = Y_{t+1} - Ŷ_{t+1|t} = C X̃_{t+1|t} + e_{2,t+1}

and therefore:

Σ^xx_{t+1|t} = A Σ^xx_{t|t} A^T + Σ_1
Σ^yy_{t+1|t} = C Σ^xx_{t+1|t} C^T + Σ_2

Σ^xy_{t+1|t} can also be calculated (it equals Σ^xx_{t+1|t} C^T).

Kalman filter applied to a falling body

Description of the system (the input is u_t = g):

A = [ 1 1 ; 0 1 ]    B = [ -1/2 ; -1 ]    C = [ 1 0 ]
Σ_1 = [ 2.0 0.8 ; 0.8 1.0 ]    Σ_2 = [ 10000 ]

Initialization: released 10000 m above the ground at 0 m/s:

X̂_{1|0} = [ 10000 ; 0 ]    Σ^xx_{1|0} = [ 0 0 ; 0 0 ]    Σ^yy_{1|0} = [ 10000 ]

Kalman filter applied to a falling body (cont'd)

1st observation (t = 1): y_1 = 10171

Reconstruction:
K_1 = [ 0  0 ]^T
X̂_{1|1} = [ 10000 ; 0 ]
Σ^xx_{1|1} = [ 0 0 ; 0 0 ]

Prediction:
X̂_{2|1} = [ 9995.09 ; -9.82 ]
Σ^xx_{2|1} = [ 2 0.8 ; 0.8 1 ]
Σ^yy_{2|1} = [ 10002 ]

Kalman filter applied to a falling body (cont'd)

2nd observation (t = 2): y_2 = 10046

Reconstruction:
K_2 = [ 0.00020  0.00008 ]^T
X̂_{2|2} = [ 9995.1 ; -9.81 ]
Σ^xx_{2|2} = [ 2 0.8 ; 0.8 1 ]

Prediction:
X̂_{3|2} = [ 9980.38 ; -19.63 ]
Σ^xx_{3|2} = [ 6.6 2.6 ; 2.6 2 ]
Σ^yy_{3|2} = [ 10006.6 ]

Kalman filter applied to a falling body (cont'd)

3rd observation (t = 3): y_3 = 10082

Reconstruction:
K_3 = [ 0.00066  0.00026 ]^T
X̂_{3|3} = [ 9980.45 ; -19.6 ]
Σ^xx_{3|3} = [ 6.59 2.6 ; 2.6 2 ]

Prediction:
X̂_{4|3} = [ 9955.94 ; -29.41 ]
Σ^xx_{4|3} = [ 15.79 5.4 ; 5.4 3 ]
Σ^yy_{4|3} = [ 10015.79 ]
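The three sweeps above can be reproduced in a few lines. This sketch assumes g = 9.82 m/s² (the slides do not state the value used), so the final numbers agree with the slides only up to rounding:

```python
import numpy as np

# Reproducing the filter sweeps for the falling body; g = 9.82 m/s^2 is an
# assumption, not stated on the slides.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[-0.5], [-1.0]])
C = np.array([[1.0, 0.0]])
Sigma1 = np.array([[2.0, 0.8], [0.8, 1.0]])
Sigma2 = np.array([[10000.0]])
g = np.array([9.82])

x = np.array([10000.0, 0.0])              # X_{1|0}
P = np.zeros((2, 2))                      # Sigma^xx_{1|0}
for y in [10171.0, 10046.0, 10082.0]:
    Syy = C @ P @ C.T + Sigma2            # Sigma^yy_{t|t-1}
    K = P @ C.T @ np.linalg.inv(Syy)      # Kalman gain K_t
    x = x + K @ (np.array([y]) - C @ x)   # reconstruction X_{t|t}
    P = P - K @ Syy @ K.T                 # Sigma^xx_{t|t}
    x = A @ x + B @ g                     # prediction X_{t+1|t}
    P = A @ P @ A.T + Sigma1              # Sigma^xx_{t+1|t}

# x is now X_{4|3}, within rounding of the slides' [9955.94, -29.41]
```

Because Σ_2 is huge relative to Σ_1, the gains stay tiny and the filter leans almost entirely on the model rather than the noisy position measurements.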

Falling body: the first 10 time points

[Figure: observed position, reconstructed position and reconstructed velocity over the first 10 time points; left axis: position (m), right axis: velocity (m/s)]

Falling body: wrong initial state

[Figure: observed position, reconstructed position and reconstructed velocity over time points 0 to 30, starting from a wrong initial state; left axis: position (m), right axis: velocity (m/s)]