Nonlinear State Estimation! Extended Kalman Filters!


Robert Stengel, Optimal Control and Estimation, MAE 546, Princeton University, 2017

- Deformation of the probability distribution
- Neighboring-optimal estimator
- Extended Kalman-Bucy filter
- Hybrid extended Kalman filter
- Quasilinear extended Kalman filter

Copyright 2017 by Robert Stengel. All rights reserved. For educational use only.
http://www.princeton.edu/~stengel/mae546.html
http://www.princeton.edu/~stengel/optconest.html

Linear Transformation of a Probability Distribution

With $y = kx$, the shape of the probability distribution is unchanged:

$$\bar{y} = E(y) = \int_{-\infty}^{\infty} y\,pr(x)\,dx = \int_{-\infty}^{\infty} kx\,pr(x)\,dx = k\bar{x}$$

$$\sigma_y^2 = E\big[(y-\bar{y})^2\big] = \int_{-\infty}^{\infty} (y-\bar{y})^2\,pr(x)\,dx = k^2\int_{-\infty}^{\infty} (x-\bar{x})^2\,pr(x)\,dx = k^2\sigma_x^2$$
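A minimal numerical check of the two relations above (not from the slides): sampling a Gaussian and applying $y = kx$ should scale the sample mean by $k$ and the sample variance by $k^2$.

```python
# Sample-based check that y = k*x gives y_bar = k*x_bar and
# sigma_y^2 = k^2 * sigma_x^2; the distribution parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
k = 3.0
x = rng.normal(loc=2.0, scale=1.5, size=200_000)  # x ~ N(2, 1.5^2)
y = k * x

print(np.mean(y), k * np.mean(x))   # means agree
print(np.var(y), k**2 * np.var(x))  # variances agree
```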

Linear Transformation of a Gaussian Probability Distribution

The probability distribution of $y = kx$ is Gaussian as well:

$$pr(x) = \frac{1}{\sqrt{2\pi}\,\sigma_x}\,e^{-(x-\bar{x})^2/2\sigma_x^2}$$

$$pr(y) = \frac{1}{\sqrt{2\pi}\,\sigma_y}\,e^{-(y-\bar{y})^2/2\sigma_y^2} = \frac{1}{\sqrt{2\pi}\,k\sigma_x}\,e^{-(y-k\bar{x})^2/2k^2\sigma_x^2}$$

The skew remains zero:

$$E\big[(y-\bar{y})^3\big] = \int (y-\bar{y})^3\,pr(x)\,dx = k^3\int (x-\bar{x})^3\,pr(x)\,dx = 0$$

Nonlinear Transformation of a Probability Distribution

With $y = f(x)$, expand about $x_o$:

$$y(x_o + \Delta x) = f(x_o) + \left.\frac{\partial f}{\partial x}\right|_{x=x_o}\Delta x + \cdots,\qquad \Delta y \approx \left.\frac{\partial f}{\partial x}\right|_{x=x_o}\Delta x$$

Equating probability mass over corresponding intervals, $\Pr\big[y(x_o),\,y(x_o+\Delta x)\big] = \Pr\big[x_o,\,x_o+\Delta x\big]$, so $pr\big[y(x_o)\big]\Delta y = pr[x_o]\Delta x$, and in the limit $\Delta x \to 0$:

$$pr\big[y(x_o)\big] = \frac{pr[x_o]}{\big|\partial f/\partial x\big|_{x=x_o}}$$

Nonlinear Transformation of a Gaussian Probability Distribution

With $y = f(x)$:

$$\bar{y} = E(y) = \int_{-\infty}^{\infty} f(x)\,pr(x)\,dx \ne k\bar{x}\ \text{(in general)}$$

$$E\big[(y-\bar{y})^2\big] = \int (y-\bar{y})^2\,pr(x)\,dx \ne k^2\int (x-\bar{x})^2\,pr(x)\,dx\ \text{(in general)}$$

$$\text{Skew:}\quad E\big[(y-\bar{y})^3\big] = \int (y-\bar{y})^3\,pr(x)\,dx \ne 0\ \text{(in general)}$$

The probability distribution of y is not Gaussian, but it has a mean and variance.

State Propagation for Nonlinear Dynamic Systems

Continuous-time system:
$$\dot{x}(t) = f_C\big[x(t),u(t),w(t),t\big],\qquad x(0)\ \text{given}$$
$$x(t_{k+1}) = x(t_k) + \int_{t_k}^{t_{k+1}} f_C\big[x(t),u(t),w(t),t\big]\,dt \triangleq x(t_k) + \Delta x(t_k,t_{k+1})\qquad \text{(explicit numerical integration)}$$

Discrete-time system:
$$x(t_{k+1}) = f_D\big[x(t_k),u(t_k),w(t_k),t_k\big],\qquad x(0)\ \text{given}$$
$$x(t_{k+1}) = x(t_k) + \Delta f_D\big[x(t_k),u(t_k),w(t_k),t_k\big] \triangleq x(t_k) + \Delta x(t_k,t_{k+1})\qquad \text{(explicit numerical summation)}$$

In both cases the state propagation can be expressed as $x(t_{k+1}) = x(t_k) + \Delta x(t_k,t_{k+1})$.
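The explicit-integration form above can be sketched numerically. This is a minimal illustration, assuming simple scalar dynamics $f_C(x) = -x^3$ (not an example from the lecture) and Euler substeps for the integral.

```python
# Sketch of x(t_{k+1}) = x(t_k) + integral of f_C over [t_k, t_{k+1}]
# using explicit Euler steps; the dynamics are an illustrative assumption.
import numpy as np

def f_C(x, t):
    return -x**3  # assumed nonlinear dynamics, no input or disturbance

def propagate(x_k, t_k, t_k1, n_sub=100):
    """Approximate x(t_k) + integral of f_C over [t_k, t_k1]."""
    dt = (t_k1 - t_k) / n_sub
    x = x_k
    for i in range(n_sub):
        x = x + f_C(x, t_k + i * dt) * dt  # x <- x + dx
    return x

x0 = 2.0
x1 = propagate(x0, 0.0, 1.0)
print(x1)  # magnitude shrinks toward zero for these stable dynamics
```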

Nonlinear Dynamic Systems with Random Inputs and Measurement Error

Continuous-time system with random inputs and measurement error:
$$\dot{x}(t) = f_C\big[x(t),u(t),w(t),t\big],\qquad E\big[x(0)\big]\ \text{given}$$
$$z(t) = h\big[x(t),u(t)\big] + n(t)$$

The structure, f(.), and parameters, p(t), of the nonlinear system are known without error; stochastic effects are additive and small.

$$E\big[w(t)w^T(\tau)\big] = Q_C(t)\,\delta(t-\tau),\qquad E\big[n(t)n^T(\tau)\big] = R_C(t)\,\delta(t-\tau),\qquad E\big[w(t)n^T(\tau)\big] = 0$$

Discrete-time system with random inputs and measurement error:
$$x(t_{k+1}) = f_D\big[x(t_k),u(t_k),w(t_k),t_k\big],\qquad E\big[x(0)\big]\ \text{given}$$
$$z(t_k) = h\big[x(t_k),u(t_k)\big] + n(t_k)$$
$$E\big[w_j w_k^T\big] = Q_k\,\delta_{jk},\qquad E\big[n_j n_k^T\big] = R_k\,\delta_{jk},\qquad E\big[w_j n_k^T\big] = 0$$

Nonlinear Propagation of the Mean

Underlying deterministic model (either case):
$$x(t_{k+1}) = x(t_k) + \Delta f\big[x(t_k),u(t_k),w(t_k),t_k\big] = x(t_k) + \Delta x(t_k,t_{k+1})$$

Propagation of the mean (continuous- or discrete-time):
$$E\big[x(t_{k+1})\big] \triangleq \bar{x}(t_{k+1}) = E\big[x(t_k) + \Delta x(t_k,t_{k+1})\big] = E\big[x(t_k)\big] + E\big[\Delta x(t_k,t_{k+1})\big] = \bar{x}(t_k) + \overline{\Delta x}(t_k,t_{k+1})$$

The state estimate is usually defined as the estimate of the mean of the associated Markov process.

Probability Distribution Propagation for Nonlinear Dynamic Systems

Consequently,
$$x(t_{k+n}) = x(t_k) + \Delta x(t_k) + \Delta x(t_{k+1}) + \cdots + \Delta x(t_{k+n-1})$$

Central limit theorem: the probability distribution of $x(t_{k+n})$ approaches a Gaussian distribution for large $n$, even if $pr\big[\Delta x(t_k,t_{k+1})\big]$ is not Gaussian. Mean and variance are therefore the dominant measures of the probability distribution of a system's state, x(t).

Nonlinear Propagation of the Covariance

Second central moment (covariance matrix):
$$P(t_{k+1}) = E\Big\{\big[x(t_{k+1}) - \bar{x}(t_{k+1})\big]\big[x(t_{k+1}) - \bar{x}(t_{k+1})\big]^T\Big\}$$

Writing $\tilde{x}(t_k) \triangleq x(t_k) - \bar{x}(t_k)$ and $\Delta\tilde{x}(t_k) \triangleq \Delta x(t_k) - \overline{\Delta x}(t_k)$:

$$P(t_{k+1}) = E\Big\{\big[\tilde{x}(t_k) + \Delta\tilde{x}(t_k)\big]\big[\tilde{x}(t_k) + \Delta\tilde{x}(t_k)\big]^T\Big\}$$
$$= E\big[\tilde{x}\,\tilde{x}^T\big] + E\big[\tilde{x}\,\Delta\tilde{x}^T\big] + E\big[\Delta\tilde{x}\,\tilde{x}^T\big] + E\big[\Delta\tilde{x}\,\Delta\tilde{x}^T\big]$$
$$\triangleq P(t_k) + M^T(t_k) + M(t_k) + \Delta P(t_k)$$

Gaussian and Non-Gaussian Probability Distributions

- (Almost) all random variables have means and standard deviations.
- Minimizing the estimate-error covariance tends to minimize the spread of the error in many (but not all) non-Gaussian cases.
- The Central Limit Theorem implies that estimate errors tend toward a normal distribution.

Neighboring-Optimal Estimator!

Neighboring-Optimal Estimator

$$\dot{x}(t) = f\big[x(t),u(t),w(t),t\big],\qquad z(t) = h\big[x(t)\big] + n(t)$$

Assume:
- A nominal solution $\big[x_o(t),u_o(t),w_o(t)\big]$ is known in $[0,t_f]$
- Disturbances and measurement errors are small
- The state stays close to the nominal solution
- Mean and variance are good approximators of the probability distribution

$$\dot{x}_o(t) + \Delta\dot{x}(t) = f\big[x_o(t),u_o(t),w_o(t),t\big] + \big[F(t)\Delta x(t) + G(t)\Delta u(t) + L(t)\Delta w(t)\big]$$
$$z_o(t) + \Delta z(t) = h\big[x_o(t)\big] + n_o(t) + \big[H(t)\Delta x(t) + \Delta n(t)\big]$$

Jacobian matrices are evaluated along the nominal path:
$$F(t) = \frac{\partial f}{\partial x}\bigg|_{x_o(t),\,u_o(t),\,w_o(t)},\qquad G(t) = \frac{\partial f}{\partial u}\bigg|_{x_o(t),\,u_o(t),\,w_o(t)},\qquad L(t) = \frac{\partial f}{\partial w}\bigg|_{x_o(t),\,u_o(t),\,w_o(t)}$$

Estimate the perturbation from the nominal path:
$$\Delta\dot{\hat{x}}(t) = F(t)\Delta\hat{x}(t) + G(t)\Delta u(t) + K_C(t)\big[\Delta z(t) - H(t)\Delta\hat{x}(t)\big]$$

$K_C(t)$ is the LTV Kalman-Bucy gain matrix; an LTV Kalman filter could be used to estimate the state at discrete instants of time.

$$\hat{x}(t) \approx x_{Nom}(t) + \Delta\hat{x}(t)$$

Neighboring-Optimal Estimator Example (from Gelb, 1974)

Radar tracking a falling body (one dimension):
$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \\ \dot{x}_3(t) \end{bmatrix} = \begin{bmatrix} -x_2(t) \\ d - g \\ 0 \end{bmatrix},\qquad z(t) = x_1(t) + n(t)$$

where x1 = altitude, x2 = velocity, x3 = ballistic coefficient, and

$$\text{Drag} = d = \frac{\rho\,x_2^2(t)}{2x_3},\qquad \text{Density} = \rho = \rho_o\,e^{-\text{altitude}/k} = \rho_o\,e^{-x_1(t)/k}$$

Estimate Error for Example (from Gelb, 1974)

$$P(0) = \begin{bmatrix} 500 & 0 & 0 \\ 0 & 20{,}000 & 0 \\ 0 & 0 & 2.5\times10^5 \end{bmatrix},\qquad w(t) = 0$$

- Pre-computed nominal trajectory; filter gains also may be pre-computed
- Position and velocity errors diverge
- Ballistic coefficient estimate error exceeds filter estimate
- Example of parameter estimation
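The falling-body dynamics can be simulated directly. This is a sketch of the transcribed equations only; the constants (rho0, the density scale height, g, and the initial state) are assumed, Gelb-style values, not taken from the slides.

```python
# Euler simulation of the falling-body dynamics as transcribed above;
# all numerical constants are illustrative assumptions (ft units).
import numpy as np

rho0, k_scale, g = 3.4e-3, 22_000.0, 32.2   # assumed density model and gravity

def dynamics(x):
    x1, x2, x3 = x                      # altitude, velocity, ballistic coeff.
    rho = rho0 * np.exp(-x1 / k_scale)  # rho = rho0 * exp(-x1/k)
    d = rho * x2**2 / (2.0 * x3)        # drag
    return np.array([-x2, d - g, 0.0])  # [x1dot, x2dot, x3dot]

def simulate(x0, t_f=10.0, dt=0.01):
    x = np.array(x0, dtype=float)
    for _ in range(int(t_f / dt)):
        x = x + dynamics(x) * dt        # Euler propagation
    return x

x0 = [100_000.0, 6_000.0, 2_000.0]      # assumed initial state
xf = simulate(x0)
print(xf)  # altitude falls; the ballistic coefficient stays constant
```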

Extended Kalman-Bucy Filter!

Extended Kalman-Bucy Filter

$$\dot{x}(t) = f\big[x(t),u(t),w(t),t\big],\qquad z(t) = h\big[x(t)\big] + n(t)$$

Assume:
- No nominal solution is reliable or available
- Disturbances and measurement errors may not be small

Jacobian matrices are evaluated along the estimated trajectory:
$$F(t) = \frac{\partial f}{\partial x},\qquad G(t) = \frac{\partial f}{\partial u},\qquad L(t) = \frac{\partial f}{\partial w},\qquad H(t) = \frac{\partial h}{\partial x}$$
each evaluated at $\big[\hat{x}(t),u(t)\big]$.

Extended Kalman-Bucy Filter

In the estimator:
- Replace the linear dynamic model by the nonlinear model
- Compute the filter gain matrix using the linearized model
- Make a linear update to the state estimate propagated by the nonlinear model

State estimate: nonlinear propagation plus linear correction
$$\dot{\hat{x}}(t) = f\big[\hat{x}(t),u(t),t\big] + K_C(t)\big\{z(t) - h\big[\hat{x}(t)\big]\big\}$$
$$\hat{x}(t_{k+1}) = \hat{x}(t_k) + \int_{t_k}^{t_{k+1}} \Big( f\big[\hat{x}(t),u(t),t\big] + K_C(t)\big\{z(t) - h\big[\hat{x}(t)\big]\big\} \Big)\,dt$$

Filter gain:
$$K_C(t) = P(t)\,H^T\big[\hat{x}(t),u(t)\big]\,R_C^{-1}(t)$$

Extended Kalman-Bucy Filter

Covariance estimate:
$$\dot{P}(t) = F\big[\hat{x}(t),u(t)\big]P(t) + P(t)F^T\big[\hat{x}(t),u(t)\big] + L\big[\hat{x}(t),u(t)\big]Q_C(t)L^T\big[\hat{x}(t),u(t)\big] - K_C(t)H\big[\hat{x}(t),u(t)\big]P(t)$$

Linear Kalman-Bucy filter:
- State estimate is affected by the covariance estimate
- Covariance estimate is not affected by the state estimate
- Consequently, the covariance estimate is unaffected by the output, z(t)

Extended Kalman-Bucy filter:
- State estimate is affected by the covariance estimate
- Covariance estimate is affected by the state estimate
- Therefore, the covariance estimate is affected by the output, z(t)

Extended Kalman-Bucy Filter Example (from Gelb, 1974)
- Early tracking error is large
- Position, velocity, and ballistic coefficient errors converge to estimated bounds
- Filter gains must be computed on-line

Hybrid Extended Kalman Filter

Numerical integration of the continuous-time propagation equations.

State estimate:
$$\hat{x}(t_k)(-) = \hat{x}(t_{k-1})(+) + \int_{t_{k-1}}^{t_k} f\big[\hat{x}(\tau),u(\tau)\big]\,d\tau$$

Covariance estimate:
$$P(t_k)(-) = P(t_{k-1})(+) + \int_{t_{k-1}}^{t_k} \Big[ F(\tau)P(\tau) + P(\tau)F^T(\tau) + L(\tau)Q_C(\tau)L^T(\tau) \Big]\,d\tau$$

Jacobian matrices must be calculated.

Hybrid Extended Kalman Filter

Recursive discrete-time estimate updates.

Filter gain:
$$K(t_k) = P(t_k)(-)\,H^T(t_k)\Big[ H(t_k)P(t_k)(-)\,H^T(t_k) + R(t_k) \Big]^{-1}$$

State estimate (+):
$$\hat{x}(t_k)(+) = \hat{x}(t_k)(-) + K(t_k)\big\{z(t_k) - h\big[\hat{x}(t_k)(-)\big]\big\}$$

Covariance estimate (+):
$$P(t_k)(+) = \big[I_n - K(t_k)H(t_k)\big]\,P(t_k)(-)$$
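One hybrid-EKF cycle can be sketched in scalar form: Euler integration of the estimate and covariance, then the discrete gain/update. The dynamics $\dot{x} = -\sin x$ are borrowed from the Monte Carlo example later in the deck; the linear measurement $z = x + n$ and the noise intensities are illustrative assumptions.

```python
# Minimal scalar sketch of one hybrid EKF cycle: continuous propagation
# of xhat and P by Euler steps, then the discrete gain/update equations.
import numpy as np

def f(x):  return -np.sin(x)        # dynamics (from the later example)
def F(x):  return -np.cos(x)        # Jacobian df/dx
def h(x):  return x                 # assumed linear measurement
def H(x):  return 1.0               # Jacobian dh/dx

Qc, R = 0.1, 0.04                   # assumed noise intensities

def hybrid_ekf_step(xhat, P, z, dt=0.01, n_sub=10):
    # Propagation to (-): Pdot = F P + P F' + L Qc L' with L = 1
    for _ in range(n_sub):
        P = P + (2.0 * F(xhat) * P + Qc) * dt
        xhat = xhat + f(xhat) * dt
    # Update to (+): gain, state, covariance
    K = P * H(xhat) / (H(xhat) * P * H(xhat) + R)
    xhat = xhat + K * (z - h(xhat))
    P = (1.0 - K * H(xhat)) * P
    return xhat, P

xhat, P = hybrid_ekf_step(xhat=1.0, P=1.0, z=0.8)
print(xhat, P)  # the update pulls the estimate toward the measurement
```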

Iterated Extended Kalman Filter (Gelb, 1974)

Re-apply the update equations to the updated solution to improve the estimate before proceeding; re-linearize the output matrix before each new update.

State estimate (+), with an arbitrary number of iterations, $i = 0,1,\ldots$:
$$\hat{x}_{k,i+1}(+) = \hat{x}_k(-) + K_{k,i}\Big\{ z_k - h\big[\hat{x}_{k,i}(+)\big] - H_k\big[\hat{x}_{k,i}(+)\big]\big[\hat{x}_k(-) - \hat{x}_{k,i}(+)\big] \Big\},\qquad \hat{x}_{k,0}(+) = \hat{x}_k(-)$$

Filter gain:
$$K_{k,i} = P_k(-)\,H_k^T\big[\hat{x}_{k,i}(+)\big]\Big\{ H_k\big[\hat{x}_{k,i}(+)\big]P_k(-)\,H_k^T\big[\hat{x}_{k,i}(+)\big] + R_k \Big\}^{-1}$$

Covariance estimate (+):
$$P_{k,i+1}(+) = \Big\{ I_n - K_{k,i}H_k\big[\hat{x}_{k,i}(+)\big] \Big\}P_k(-)$$

Two-Dimensional Example (from Gelb, 1974)

Same falling-sphere dynamics, with offset radar:
$$z(t_k) = \Big\{ r_1^2 + \big[x_1(t_k) - r_2\big]^2 \Big\}^{1/2} + n(t_k)$$
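The iterated update can be sketched in scalar form using the offset-radar measurement just above. The numbers (r1, r2, the prior, R, the noise-free measurement) are illustrative assumptions.

```python
# Scalar sketch of the iterated EKF update with relinearization of H
# about each iterate; h(x) = sqrt(r1^2 + (x - r2)^2) as on the slide.
import numpy as np

r1, r2 = 1_000.0, 500.0              # assumed radar offsets
R = 100.0                            # assumed measurement-noise variance

def h(x):
    return np.sqrt(r1**2 + (x - r2)**2)

def H(x):
    return (x - r2) / np.sqrt(r1**2 + (x - r2)**2)   # dh/dx

def iterated_update(x_minus, P_minus, z, n_iter=5):
    x_i = x_minus                    # xhat_{k,0}(+) = xhat_k(-)
    for _ in range(n_iter):
        Hi = H(x_i)
        K_i = P_minus * Hi / (Hi * P_minus * Hi + R)
        # Relinearized update about the current iterate x_i
        x_i = x_minus + K_i * (z - h(x_i) - Hi * (x_minus - x_i))
    P_plus = (1.0 - K_i * H(x_i)) * P_minus
    return x_i, P_plus

x_true = 2_000.0
z = h(x_true)                        # noise-free measurement for the sketch
x_plus, P_plus = iterated_update(x_minus=1_800.0, P_minus=10_000.0, z=z)
print(x_plus, P_plus)  # iterates move toward the true state
```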

Comparison of Filter Results (from Gelb, 1974)

- Comparison of alternative nonlinear estimators
- 100-trial Monte Carlo evaluation (Wishner, Automatica, 1969)
- Second-order filter includes additional terms in f[.] and h[.]

Quasilinearization (Describing Functions)!

Quasilinearization

- True linearization: slope of a nonlinear curve at the evaluation point
- Quasilinearization: amplitude-dependent slope of a nonlinear curve at the evaluation point
- Describing function: quasilinear function of affine form:

$$\text{Describing Function} = \text{Bias} + \text{Scale Factor}\times(x - x_o)$$

Example: saturation (limiting) nonlinearity.

Comparison of Clipped Sine Wave with Describing Function Approximation
[figure]

Deterministic Describing Function of the Saturation Function

The describing function depends on the wave form and amplitude of the input.

Saturation function:
$$y = f(x) = \begin{cases} a, & x \ge a \\ x, & -a < x < a \\ -a, & x \le -a \end{cases}$$

Describing-function input = nonlinear-function input. For a sinusoidal input, the output is a clipped sine wave:
$$x(t) = A\sin\omega t,\qquad y(t) = f\big[A\sin\omega t\big]$$

Sinusoidal-Input Describing Function of the Saturation Function (from Graham and McRuer, 1961)

Approximate the nonlinear function by a linear function, $f(x) \approx d_0 + d_1(x - x_o)$, most readily calculated as the first term of a Fourier series for y(t). For a symmetric input ($x_o = 0$) to a symmetric nonlinearity, $d_0 = 0$ and, for $A \ge a$,

$$d_1 = \frac{2}{\pi}\left[ \sin^{-1}\!\left(\frac{a}{A}\right) + \frac{a}{A}\sqrt{1 - \left(\frac{a}{A}\right)^2}\, \right]$$

a: saturation limit; A: input amplitude.
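The closed-form $d_1$ above can be checked numerically: it should equal the fundamental Fourier sine coefficient of the clipped sine wave divided by the input amplitude. The particular values of a and A are assumptions.

```python
# Numerical check: b1/A for a clipped sine wave matches
# (2/pi)[asin(a/A) + (a/A) sqrt(1 - (a/A)^2)] for A >= a.
import numpy as np

a, A = 1.0, 2.5                      # assumed saturation limit and amplitude
n = 200_000
wt = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
y = np.clip(A * np.sin(wt), -a, a)   # clipped sine wave

# Fundamental Fourier sine coefficient: b1 = (1/pi) * integral of y sin(wt)
dwt = 2.0 * np.pi / n
b1 = np.sum(y * np.sin(wt)) * dwt / np.pi
d1_numeric = b1 / A                  # describing-function gain

r = a / A
d1_closed = (2.0 / np.pi) * (np.arcsin(r) + r * np.sqrt(1.0 - r**2))
print(d1_numeric, d1_closed)  # the two agree
```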

Sinusoidal-Input Describing Function of the Saturation Function

Describing-function output:
$$y_D(t) = d_1\,A\sin\omega t = \frac{2A}{\pi}\left[ \sin^{-1}\!\left(\frac{a}{A}\right) + \frac{a}{A}\sqrt{1 - \left(\frac{a}{A}\right)^2}\, \right]\sin\omega t$$

See "Describing Function Analysis of Nonlinear Simulink Models" in Simulink Control Design.

Nonlinearity Introduces Harmonics in Output
Example: synthesized cello tone.

Harmonic Describing Functions of Saturation

The Fourier series of a symmetrically clipped sine wave includes symmetric (odd) harmonic terms:
$$y_D(t) = d_1\sin\omega t + d_3\sin(3\omega t + \phi_3) + d_5\sin(5\omega t + \phi_5) + \cdots$$
(Graham and McRuer, 1961)

Describing Function Derived from Expected Values of Input and Output

$$\text{Describing Function} = \text{Bias} + \text{Scale Factor}\times(x - x_o)$$

Approximate the nonlinear function by a linear function, $f(x) \approx d_0 + d_1(x - x_o)$. Statistical representation of the fit error:
$$J = E\Big\{ \big[f(x) - d_0 - d_1(x - x_o)\big]^2 \Big\}$$

Minimize the fit error to find $d_0$ and $d_1$:
$$\frac{\partial J}{\partial d_0} = 0;\qquad \frac{\partial J}{\partial d_1} = 0$$

Random-Input Describing Function

Bias and scale factor, with $\Delta x \triangleq x - x_o$. The two optimality conditions give
$$d_0 = E\big[f(x)\big] - d_1 E\big[\Delta x\big],\qquad d_1 = \frac{E\big[\Delta x\,f(x)\big] - d_0 E\big[\Delta x\big]}{E\big[\Delta x^2\big]}$$

and, by elimination,
$$d_0 = \frac{E\big[\Delta x^2\big]E\big[f(x)\big] - E\big[\Delta x\big]E\big[\Delta x\,f(x)\big]}{E\big[\Delta x^2\big] - E^2\big[\Delta x\big]},\qquad d_1 = \frac{E\big[\Delta x\,f(x)\big] - E\big[\Delta x\big]E\big[f(x)\big]}{E\big[\Delta x^2\big] - E^2\big[\Delta x\big]}$$

Random-Input Describing Function

If $\Delta x$ and $f(x)$ both have zero mean:
$$d_0 = 0,\qquad d_1 = \frac{E\big[\Delta x\,f(x)\big]}{E\big[\Delta x^2\big]} = \frac{E\big[x\,f(x)\big]}{E\big[x^2\big]}$$

Describing function for a symmetric function:
$$\text{Describing Function} = d_0 + d_1(x - x_o) = d_1 x$$

Random-Input Describing Function for the Saturation Function (from Graham and McRuer, 1961)

Describing-function input: zero-mean white noise with standard deviation $\sigma$, $x(t) \sim N(0,\sigma)$.

Describing-function output:
$$y_D(t) = \frac{E\big[\Delta x\,f(x)\big]}{E\big[\Delta x^2\big]}\,x(t) = d_1\,x(t) = \mathrm{erf}\!\left(\frac{a}{\sqrt{2}\,\sigma}\right)x(t)$$

where the error function, erf(.), is
$$\mathrm{erf}(z) = \frac{2}{\sqrt{\pi}}\int_0^z e^{-\tau^2}\,d\tau$$

Comparison of Clipped White Noise with Describing Function Approximation
[figure]
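A Monte Carlo check of the result above: for the saturation with Gaussian input, $d_1 = E[x f(x)]/E[x^2]$ should match $\mathrm{erf}\!\big(a/(\sqrt{2}\sigma)\big)$. The values of a and sigma are assumptions.

```python
# Monte Carlo estimate of the random-input describing-function gain
# versus the closed-form erf expression.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
a, sigma = 1.0, 1.5                         # assumed limit and input std dev
x = rng.normal(0.0, sigma, size=2_000_000)
y = np.clip(x, -a, a)                       # saturation function f(x)

d1_mc = np.mean(x * y) / np.mean(x * x)     # E[x f(x)] / E[x^2]
d1_erf = erf(a / (sqrt(2.0) * sigma))
print(d1_mc, d1_erf)  # the two agree to Monte Carlo accuracy
```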

Random-Input and Sinusoidal Describing Functions of the Saturation Function

With the describing function plotted in dB, the general shapes are similar; in the random-input case the input standard deviation takes the place of the amplitude A in the figures.

Multivariate Describing Functions

Let f(x) be a nonlinear vector function of a vector. The quasilinear describing-function approximation is
$$f(x) \approx b + D(x - \hat{x}),\qquad \hat{x} = E(x)$$

Cost function = trace of the error covariance matrix:
$$J = E\Big\{ \mathrm{Tr}\Big( \big[f(x) - b - D(x - \hat{x})\big]\big[f(x) - b - D(x - \hat{x})\big]^T \Big) \Big\}$$

Quasilinear extended Kalman-Bucy filter: F, G, and L are replaced by describing-function matrices.

Describing Function Matrices for State Estimation

Minimize the fit error to find b and D:
$$\frac{\partial J}{\partial b} = 0;\qquad \frac{\partial J}{\partial D} = 0$$

Describing-function bias (see text):
$$b = E\big[f(x)\big] \triangleq \hat{f}(x)$$

The describing-function scaling matrix is a function of the covariance inverse (see text):
$$D = E\big[f(x)\,\Delta x^T\big]P^{-1},\qquad \text{where } \Delta x = x - \hat{x},\quad P = E\big[\Delta x\,\Delta x^T\big]$$

Monte Carlo Comparison of Quasilinear Filter with Extended Kalman Filter and Three Others (from Gelb, 1974)

$$\dot{x}(t) = -\sin x(t) + w(t),\qquad z_k = 0.5\sin(2x_k) + n_k$$
(Phaneuf, MIT PhD thesis, 1968)
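The matrices b and D can be estimated by sampling. As a sanity-check sketch (the matrix A, the mean, and the spread are assumptions): for a linear function $f(x) = Ax$, the scaling matrix $D = E[f(x)\Delta x^T]P^{-1}$ should recover A.

```python
# Sample-based describing-function matrices b = E[f(x)] and
# D = E[f(x) dx^T] P^{-1}; for linear f(x) = A x, D recovers A.
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[1.0, 0.5], [-0.3, 2.0]])     # assumed linear map
xhat = np.array([0.2, -0.1])                # assumed mean
x = xhat + rng.normal(size=(500_000, 2)) @ np.diag([0.5, 1.0])

fx = x @ A.T                                # f(x) = A x per sample
dx = x - x.mean(axis=0)                     # dx = x - xhat (sample mean)
P = dx.T @ dx / len(x)                      # P = E[dx dx^T]

b = fx.mean(axis=0)                         # describing-function bias
D = (fx.T @ dx / len(x)) @ np.linalg.inv(P) # E[f dx^T] P^{-1}
print(D)  # close to A
```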

Next Time: Sigma-Point (Unscented Kalman) Filters, plus a Brief Introduction to Particle, Batch Least-Squares, Backward-Smoothing, and Gaussian Mixture Filters

Supplemental Material!

Quasilinear Filter Propagation Equations

True-linearization Jacobians are replaced by quasilinear (describing-function) Jacobians.

State estimate:
$$\hat{x}(t_k)(-) = \hat{x}(t_{k-1})(+) + \int_{t_{k-1}}^{t_k} f\big[\hat{x}(\tau),u(\tau)\big]\,d\tau$$

Covariance estimate:
$$P(t_k)(-) = P(t_{k-1})(+) + \int_{t_{k-1}}^{t_k} \Big[ D_F(\tau)P(\tau) + P(\tau)D_F^T(\tau) + D_L(\tau)Q_C(\tau)D_L^T(\tau) \Big]\,d\tau$$

$D_F$: $(n \times n)$ stability matrix of f(x) containing describing-function elements. Covariance propagation is state-dependent.

Quasilinear System Stability Matrix

True linearization of the nonlinear system equation:
$$\dot{x}_o(t) + \Delta\dot{x}(t) \approx f\big[x_o(t),u_o(t),t\big] + F\big[x_o(t),u_o(t),t\big]\Delta x(t) + \cdots = f\big[x_o(t),u_o(t),t\big] + F(t)\Delta x(t) + \cdots$$

Quasilinearization of the nonlinear system equation, in which some or all elements of the stability matrix are state-dependent:
$$\dot{x}_o(t) + \Delta\dot{x}(t) \approx f\big[x_o(t),u_o(t),t\big] + D_F\big[E\{\Delta x(t)\},x_o(t),u_o(t),t\big]\Delta x(t) + \cdots$$

Quasilinear Filter Gain and Updates

Filter gain:
$$K(t_k) = P(t_k)(-)\,D_H^T(t_k)\Big[ D_H(t_k)P(t_k)(-)\,D_H^T(t_k) + R(t_k) \Big]^{-1}$$

State estimate update:
$$\hat{x}(t_k)(+) = \hat{x}(t_k)(-) + K(t_k)\big\{z(t_k) - h\big[\hat{x}(t_k)(-)\big]\big\}$$

Covariance estimate update:
$$P(t_k)(+) = \big[I_n - K(t_k)D_H(t_k)\big]\,P(t_k)(-)$$

$D_H$: output matrix of h(x) containing describing-function elements.