State observers and recursive filters in classical feedback control theory


State-feedback control example: second-order system

Consider the driven second-order system

$$\ddot q + 2\zeta\omega\,\dot q + \omega^2 q = u, \qquad x_1 = q,\quad x_2 = \dot q, \qquad \dot x_1 = x_2,\quad \dot x_2 = -\omega^2 x_1 - 2\zeta\omega\, x_2 + u.$$

Here $u$ could represent an external applied force (in a mechanical mass-spring-damper system) or a voltage (in the LCR circuit realization). Anticipating conventional control-theoretic notation that we will introduce below, let us also write this as

$$\dot x = Ax + Bu, \qquad A = \begin{bmatrix} 0 & 1 \\ -\omega^2 & -2\zeta\omega \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$

Recall that the damping ratio $\zeta$ sets important properties of the step (transient) response:

[Figure: step responses $u(t)$ for damping ratios $\zeta = 1/3$ (red), $\zeta = 1$ (green), and $\zeta = 4/3$ (blue).]

We can write a general solution for the initial-value problem for such a system via the matrix exponential,

$$\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \exp(At) \begin{bmatrix} x_1(0) \\ x_2(0) \end{bmatrix},$$

where methods for computing $\exp(At)$ can be found in elementary linear algebra textbooks (the case $\zeta = 1$ is slightly tricky). With $\beta \equiv \omega\sqrt{|\zeta^2 - 1|}$:

$$\zeta > 1:\quad \exp(At) = e^{-\zeta\omega t} \begin{bmatrix} \cosh\beta t + \frac{\zeta\omega}{\beta}\sinh\beta t & \frac{1}{\beta}\sinh\beta t \\ -\frac{\omega^2}{\beta}\sinh\beta t & \cosh\beta t - \frac{\zeta\omega}{\beta}\sinh\beta t \end{bmatrix},$$

$$\zeta = 1:\quad \exp(At) = e^{-\omega t} \begin{bmatrix} 1 + \omega t & t \\ -\omega^2 t & 1 - \omega t \end{bmatrix},$$

$$\zeta < 1:\quad \exp(At) = e^{-\zeta\omega t} \begin{bmatrix} \cos\beta t + \frac{\zeta\omega}{\beta}\sin\beta t & \frac{1}{\beta}\sin\beta t \\ -\frac{\omega^2}{\beta}\sin\beta t & \cos\beta t - \frac{\zeta\omega}{\beta}\sin\beta t \end{bmatrix}.$$

Let us briefly consider the linear state-feedback control scenario $u = -Kx$, where (since we have taken $u$ to be scalar) $K = \begin{bmatrix} k_1 & k_2 \end{bmatrix}$.
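As a numerical cross-check of the underdamped formula above, we can compare the closed-form expression against a general-purpose matrix exponential. This is a Python/NumPy sketch; the particular values of $\omega$, $\zeta$, and $t$ are arbitrary illustrations.

```python
import numpy as np
from scipy.linalg import expm

def closed_form_expAt(t, omega, zeta):
    """Underdamped (zeta < 1) matrix exponential for A = [[0,1],[-w^2,-2*zeta*w]]."""
    beta = omega * np.sqrt(1 - zeta**2)
    e = np.exp(-zeta * omega * t)
    c, s = np.cos(beta * t), np.sin(beta * t)
    return e * np.array([[c + zeta*omega*s/beta, s/beta],
                         [-omega**2 * s/beta,    c - zeta*omega*s/beta]])

omega, zeta, t = 2.0, 0.3, 1.7
A = np.array([[0.0, 1.0], [-omega**2, -2*zeta*omega]])

# Maximum entrywise discrepancy between the closed form and scipy's expm.
err = np.abs(expm(A * t) - closed_form_expAt(t, omega, zeta)).max()
print(err)  # tiny: the two agree to machine precision
```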

With this feedback law we have

$$\dot x = Ax + Bu = (A - BK)x, \qquad A - BK = \begin{bmatrix} 0 & 1 \\ -\omega^2 - k_1 & -2\zeta\omega - k_2 \end{bmatrix}.$$

Clearly we can view this as a modified second-order system. If we define

$$\omega_{cl}^2 = \omega^2 + k_1, \qquad 2\zeta_{cl}\omega_{cl} = 2\zeta\omega + k_2,$$

then apparently we can set $\omega_{cl}$ to any value we like through choice of the feedback parameter $k_1$, and once this is set (assuming $\omega_{cl} \neq 0$) we can adjust $\zeta_{cl}$ as desired through choice of $k_2$. Hence if we don't like the transient behavior of our original open-loop system (without feedback), we can in principle modify it however we like by use of state feedback as described above; we program the desired behavior by setting the feedback gain $K$. Note that if accurate instantaneous measurements of the components of $x$ are available as electronic signals, it is straightforward to produce the feedback signal $u$ with an op-amp circuit.

The fact that we have full power to reprogram the oscillation frequency and damping ratio in this state-feedback scenario can be made obvious if we look at the dynamics with feedback transformed back to a single second-order ODE:

$$\ddot q + 2\zeta\omega\,\dot q + \omega^2 q = -k_1 q - k_2 \dot q \quad\Longrightarrow\quad \ddot q + (2\zeta\omega + k_2)\,\dot q + (\omega^2 + k_1)\, q = 0.$$

Hence $k_1$ is associated with a modification of the harmonic restoring force, while $k_2$ directly and independently modifies the velocity damping. Just for fun, let us calculate the eigenvalues of the feedback-modified $A$ matrix:

$$\det(A - BK - \lambda I) = \lambda^2 + (2\zeta\omega + k_2)\lambda + (\omega^2 + k_1) = 0.$$

Considering the solutions given by the quadratic formula,

$$\lambda_\pm = \frac{-(2\zeta\omega + k_2) \pm \sqrt{(2\zeta\omega + k_2)^2 - 4(\omega^2 + k_1)}}{2},$$

we see that we can achieve any desired pair of eigenvalues $\{\lambda_+, \lambda_-\}$ by setting

$$k_1 = \lambda_+\lambda_- - \omega^2, \qquad k_2 = -(\lambda_+ + \lambda_-) - 2\zeta\omega.$$
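The gain formulas above can be checked numerically by matching the characteristic polynomial to the desired one. A minimal Python/NumPy sketch, with illustrative values of $\omega$, $\zeta$, and the target eigenvalues:

```python
import numpy as np

omega, zeta = 1.0, 0.2
A = np.array([[0.0, 1.0], [-omega**2, -2*zeta*omega]])
B = np.array([[0.0], [1.0]])

lam = np.array([-2.0, -3.0])             # desired closed-loop eigenvalues
k1 = lam[0]*lam[1] - omega**2            # shifts the harmonic restoring force
k2 = -(lam[0] + lam[1]) - 2*zeta*omega   # shifts the velocity damping
K = np.array([[k1, k2]])

eigs = np.sort(np.linalg.eigvals(A - B @ K).real)
print(eigs)  # approximately [-3, -2]
```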

Hence we find that we can arbitrarily program the eigenvalues of the closed-loop dynamics matrix, a design method called pole placement.

Before turning to some generalities, we note that the second-order system with the control structure we are considering can be pushed into arbitrary states in a similar manner as the double integrator. Suppose we are starting at the origin at $t = 0$ and want to reach $(q_f, \dot q_f)$ at a target time $T$. If we allow our input signals to be unbounded, we can in principle take the following approach. First give the system a momentum impulse at time $t = 0$ that will cause the system to pass through position $q_f$ at time $T$. To convince ourselves that this is really possible, we could look at the matrix exponential solution and show that we can always solve for an initial momentum $\dot q(0)$ such that $q(T) = q_f$. In the overdamped case, for example, this leads to

$$q_f = e^{-\zeta\omega T}\,\frac{\sinh\beta T}{\beta}\,\dot q(0).$$

Then at time $T$, apply a second momentum impulse that corrects the final velocity to its desired value $\dot q_f$. Note that even with this rather singular strategy, in which we rely on our ability to apply controls so strong that they overwhelm most of the system's natural dynamics (harmonic restoring force and velocity damping), we still rely on the inherent integrator structure $\dot x_1 = x_2$ that allows us to use an input to $x_2$ only, to affect the position $x_1$.

Reachability rank condition

We say that a linear system $\dot x = Ax + Bu$ is reachable if, for any initial state $x(0) = x_0$, desired final state $x_f$, and target time $T$, it is possible to find a control input $u(t)$, $0 \le t \le T$, that steers the system to reach $x(T) = x_f$. There is a theorem (see A&M pp. 67-7) that says that a system is reachable if its reachability matrix

$$W_r = \begin{bmatrix} B & AB & A^2 B & \cdots & A^{n-1} B \end{bmatrix}$$

has full rank (here $x \in \mathbb{R}^n$). If a system is reachable, it is furthermore possible to solve the pole-placement problem, in which we want to design a state feedback law $u = -Kx$ such that we can pick any eigenvalues we want for the controlled dynamics $\dot x = Ax + Bu = (A - BK)x$. Here $A$ and $B$ are given, and we must find a $K$ to achieve the desired eigenvalues for $A - BK$. There is a very convenient Matlab Control Toolbox function, place, that can be used to accomplish this numerically.

In the case of our second-order system, with $A$ and $B$ as above, we have

$$W_r = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & -2\zeta\omega \end{bmatrix},$$

and clearly $W_r$ has full rank, as $\det W_r = -1$.

Before moving on, let's look at a simple example (from Åström and Murray, Ex. 5.3) of a system that is not reachable: two identical first-order subsystems driven by the same input,

$$\dot x_1 = -x_1 + u, \qquad \dot x_2 = -x_2 + u.$$

Here we can easily compute
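The reachability computation for our second-order system is easy to reproduce numerically. A Python/NumPy sketch (illustrative $\omega$ and $\zeta$):

```python
import numpy as np

omega, zeta = 1.0, 0.5
A = np.array([[0.0, 1.0], [-omega**2, -2*zeta*omega]])
B = np.array([[0.0], [1.0]])

# Reachability matrix W_r = [B, AB] for this n = 2 system.
Wr = np.hstack([B, A @ B])
rank = np.linalg.matrix_rank(Wr)
det = np.linalg.det(Wr)
print(rank)  # 2: full rank, so the system is reachable
print(det)   # -1, independent of omega and zeta
```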

$$W_r = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix},$$

which clearly has determinant zero. This can be understood by noting the complete symmetry of the way that $u$ modifies the evolution of $x_1$ and $x_2$: for example, if $x_1(0) = x_2(0)$, there is no way to use $u$ to achieve $x_1(T) \neq x_2(T)$ at any later time.

State feedback versus output feedback

Note that in our discussion of stabilization and pole placement so far, we have assumed that it makes sense to design a control law of the form $u = -Kx$. This is called a state feedback law since, in order to determine the control input $u(t)$ at time $t$, we generally need to have full knowledge of the state $x(t)$. In practice this is often not possible, and thus we usually specify the available output signals when defining a control design problem:

$$\dot x = Ax + Bu, \qquad y = Cx.$$

Here the output signal $y(t)$, which can in principle be a vector of any dimension, represents the information about the evolving system state that is made available to the controller via sensors. An output feedback law must take the form $u(t) = f(y(t))$, where in general we can allow $u(t)$ to depend on the entire history of $y(s)$ with $s \le t$ (more on this below and later in the course).

Output feedback is a natural setting for practical applications. For example, if we are talking about cruise control for an automobile, $x$ may represent a complex set of variables having to do with the internal state of the engine, wheels, and chassis, while $y$ is only a readout from the speedometer. Hopefully it will seem natural that it is usually prohibitively difficult to install a sensor to monitor every coordinate of the system's state space, and also that it will often be unnecessary to do so (cruise-control electronics can function quite well with just the car's speed).

One simple example of a system in which full state knowledge is clearly not necessary is (asymptotic) stabilization of a simple harmonic oscillator. If the natural dynamics of the plant is $m\ddot x = -kx$, and our actuation mechanism is to apply forces directly on the mass, then the control system looks like

$$\dot x_1 = x_2, \qquad \dot x_2 = -\frac{k}{m}\, x_1 + u$$

(where $x_1$ is now the position and $x_2$ the velocity). We can clearly make the equilibrium point at the origin asymptotically stable via the linear state-feedback control law $u = -b\, x_2$ with $b > 0$, which makes the overall equation of motion

$$\ddot x = -\frac{k}{m}\, x - b\,\dot x,$$

which we recognize as a damped harmonic oscillator. Thus it is clear that the controller only needs to know the velocity of the oscillator in order to implement a successful feedback strategy. So even if we go to a SISO (single-input, single-output) formulation of this problem, $\dot x = Ax + Bu$, $y = Cx$, we are obviously fine for any $C$ of the form $C = \begin{bmatrix} 0 & c \end{bmatrix}$, since $x_2 = y/c$ and we can implement an output-feedback law of the form $u = -b\, x_2 = -(b/c)\, y$.

Clearly, if $C$ is a square matrix and $y$ has the same dimension as $x$, output feedback is essentially equivalent to state feedback if $C$ is invertible. As a generalization of what we did for the simple harmonic oscillator above, we could just design a state feedback controller $K$, set

$$\hat x = C^{-1} y,$$

and apply feedback $u = -K\hat x = -KC^{-1} y$. However, this is a special case, and not the sort of convenience we want to count on!

State estimation

At this point it might seem like we need completely new theorems about reachability and pole placement for output-feedback laws, where $u(t)$ is only allowed to depend on $y(s \le t)$. However, it turns out that we can build naturally on our previous results by appealing to a separation method. The basic idea is that we will try to construct a procedure for processing the data $y(s \le t)$ to obtain an estimate $\hat x(t)$ of the true system state $x(t)$, and then apply a feedback law $u = -K\hat x$ based on this estimate. This can be possible even when $C$ is not invertible (not even square). The controller thus assumes the structure of a dynamical system itself, with $y(t)$ as its input, $u(t)$ as its output, and $\hat x(t)$ as its internal state.

There are various ways of designing state estimators to extract $\hat x(t)$ from $y(s \le t)$, of which we will discuss two, and there is also a convenient procedure for determining whether or not $y$ contains enough information to make full state reconstruction possible in principle. The latter test looks a lot like the test for reachability, for non-accidental reasons.

Let's start by thinking about the simple harmonic oscillator again. We note that in order to asymptotically stabilize the equilibrium point at the origin, it would be most convenient to have an output signal that told us directly about its velocity $x_2$. However, you may have already realized that in a scenario with $C = \begin{bmatrix} 1 & 0 \end{bmatrix}$, $y = x_1$, it should be simple to obtain a good estimate of $x_2$ via $\hat x_2 = \dot y$. This is certainly a valid procedure for estimating $x_2$, although in practice one should be wary of taking derivatives of measured data, since that tends to accentuate high-frequency noise.

In a similar spirit we note that, for any dynamical system $\dot x = Ax + Bu$, $y = Cx$, if we hold $u$ at zero we can make use of the general relations

$$\dot y = C\dot x = CAx, \qquad \ddot y = C\ddot x = CA\dot x = CA^2 x, \qquad \ldots, \qquad \frac{d^{n-1} y}{dt^{n-1}} = CA^{n-1} x.$$

If we look at how this applies to our modified simple harmonic oscillator example, with

$$C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad A = \begin{bmatrix} 0 & 1 \\ -k/m & 0 \end{bmatrix},$$

we have

$$y = Cx = x_1, \qquad \dot y = CAx = x_2,$$

and we start to get a sense for how the natural dynamics $A$ can move information about state-space variables into the support of $C$. Hopefully it should thus seem reasonable that, in order for a system to be observable, we require that the observability matrix

$$W_o = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}$$

have full rank. Informally, if a system is observable, then we are guaranteed that we can design a procedure (such as the derivatives scheme above) to extract a faithful estimate $\hat x$ from $y$. However, it will generally be necessary to monitor $y$ for some time (and with good accuracy) before the estimation error $\tilde x(t) = x(t) - \hat x(t)$ can be made small. In the derivatives scheme, for instance, we can't estimate high derivatives of $y(t)$ until we see enough of it to get an accurate determination of its slope, curvature, etc.

State observer with innovations

A more common (and more robust) method for estimating $x$ from $y$ is to construct a state observer that applies corrections to an initial guess $\hat x$ until $C\hat x$ becomes an accurate predictor of $y$. Suppose that at some arbitrary point in time $t_0$ we have an estimate $\hat x(t_0)$. How should we update this estimate to generate estimates of the state $x(t)$ with $t > t_0$? Most simply, we could integrate

$$\dot{\hat x} = A\hat x + Bu,$$

assuming we know $A$ and $B$ for the plant. (It is generally assumed that we know $u$, since this signal is under our control!) Then we notice that the estimation error $\tilde x$ evolves as

$$\dot{\tilde x} = \dot x - \dot{\hat x} = Ax + Bu - A\hat x - Bu = A(x - \hat x) = A\tilde x.$$

Hence this strategy has the nice feature that if $A$ is stable, $\lim_{t\to\infty} \tilde x(t) = 0$, meaning that our estimate will eventually converge to the true system state. Note that this works even if $B$ and $u$ are non-zero. What if we are not so lucky as to have sufficiently stable natural dynamics $A$?

As mentioned above, a good strategy is to apply corrections to $\hat x$ at every time step in proportion to the so-called innovation $w = y - C\hat x$. Here $y - C\hat x$ is the error we make in predicting $y(t)$ on the basis of $\hat x(t)$; clearly, when $\tilde x$ is small, so is $w$. A Luenberger state observer can thus be constructed as

$$\dot{\hat x} = A\hat x + Bu + L(y - C\hat x),$$

where $L$ is a gain matrix that is left to our design. This observer equation results in

$$\dot{\tilde x} = \dot x - \dot{\hat x} = Ax + Bu - A\hat x - Bu - L(y - C\hat x) = A(x - \hat x) - LC(x - \hat x) = (A - LC)\tilde x.$$

Hence we see that our design task should be to choose $L$, given $A$ and $C$, such that $A - LC$ has nice stable eigenvalues. This should remind you immediately of the pole-placement problem in state feedback, in which we wanted to choose $K$, given $A$ and $B$, such that $A - BK$ had desired eigenvalues. Indeed, one can map between the two problems by noting that the transpose $M^T$ of a matrix has the same eigenvalues as $M$. Thus we can view our observer design problem as the choice of $L^T$ such that $(A - LC)^T = A^T - C^T L^T$ has nice stable eigenvalues, and this now has precisely the same structure as before. Indeed, there is a complete
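This duality is easy to exercise numerically: a pole-placement routine applied to $(A^T, C^T)$ yields $L^T$. A Python/SciPy sketch for the harmonic oscillator with position measurement (the target eigenvalues are illustrative):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator, omega = 1
C = np.array([[1.0, 0.0]])                # we measure position only

# Observer design via duality: place the eigenvalues of A^T - C^T L^T,
# which are the same as the eigenvalues of A - LC.
poles = [-2.0, -3.0]
L = place_poles(A.T, C.T, poles).gain_matrix.T

eigs = np.sort(np.linalg.eigvals(A - L @ C).real)
print(eigs)  # approximately [-3, -2]
```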

duality between state feedback and observer design, with correspondences

$$A \leftrightarrow A^T, \qquad B \leftrightarrow C^T, \qquad K \leftrightarrow L^T, \qquad W_r \leftrightarrow W_o^T.$$

Hence it should be clear, for example, how the Matlab function place that we mentioned above can also be used for observer design. And as long as the observability matrix has full rank, we are guaranteed to be able to find an $L$ such that $A - LC$ has arbitrary desired eigenvalues.

Pole placement with output feedback

As discussed in Section 7.3 of Åström and Murray (http://www.cds.caltech.edu/~murray/amwiki/index.php/Main_Page), the following theorem holds (here we simplify to the case of zero reference input $r$): for a system $\dot x = Ax + Bu$, $y = Cx$, the controller described by

$$u = -K\hat x, \qquad \dot{\hat x} = A\hat x + Bu + L(y - C\hat x)$$

gives a closed-loop system with the characteristic polynomial

$$\det(sI - A + BK)\,\det(sI - A + LC).$$

This polynomial can be assigned arbitrary roots if the system is observable and reachable. The overall setup is summarized in the following cartoon:

[Cartoon: plant $\dot x = Ax + Bu$, $y = Cx$ in a loop with an observer-based controller whose internal state is $\hat x$ and whose output is $u = -K\hat x$.]

Next we need to examine how this sort of strategy performs in the presence of noise.

State observers in the presence of noise: some examples

We begin by illustrating some of the concepts we introduced above on state observers for linear systems. Consider as our plant a simple harmonic oscillator (with $\omega = 1$):

$$\dot x_1 = x_2, \qquad \dot x_2 = -x_1 + u, \qquad \dot x = Ax + Bu,$$

where as usual $x_1 = q$, $x_2 = \dot q$. The dynamics as given fix

$$A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix},$$

and let us consider a general output signal related linearly to the state:
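The separation of the characteristic polynomial into controller and observer factors can be verified directly: in $(x, \tilde x)$ coordinates the closed-loop matrix is block triangular, so its eigenvalues are exactly those of $A - BK$ together with those of $A - LC$. A Python/NumPy sketch with illustrative gains:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[3.0, 4.0]])     # state-feedback gain (illustrative)
L = np.array([[5.0], [5.0]])   # observer gain (illustrative)

# Closed loop in (x, xtilde) coordinates, xtilde = x - xhat, u = -K(x - xtilde):
#   d/dt [x; xtilde] = [[A - BK, BK], [0, A - LC]] [x; xtilde]
cl = np.vstack([np.hstack([A - B @ K, B @ K]),
                np.hstack([np.zeros((2, 2)), A - L @ C])])

cl_eigs = np.sort_complex(np.linalg.eigvals(cl))
sep_eigs = np.sort_complex(np.concatenate([np.linalg.eigvals(A - B @ K),
                                           np.linalg.eigvals(A - L @ C)]))
print(cl_eigs)  # same multiset as the eigenvalues of A-BK and A-LC combined
```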

$$y = Cx, \qquad C = \begin{bmatrix} C_1 & C_2 \end{bmatrix}.$$

We have seen that the observability criterion is that we have full rank for the matrix

$$W_o = \begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} C_1 & C_2 \\ -C_2 & C_1 \end{bmatrix},$$

and since

$$\det W_o = C_1^2 + C_2^2,$$

the system is observable as long as $C$ is nonzero. For general $C$, our linear (Luenberger) observer structure is

$$\dot{\hat x} = A\hat x + Bu + L(y - C\hat x),$$

which induces the following dynamics for the estimation error:

$$\dot{\tilde x} = (A - LC)\tilde x.$$

We thus want to design $L$ to make the eigenvalues of $A - LC$ have negative real part. We note that we could do this by using Matlab's pole-placement routine place with $A \to A^T$, $B \to C^T$, $K \to L^T$. We try the following examples, designing the observer gain for the same pair of stable target eigenvalues in each case:

$C = \begin{bmatrix} 1 & 0 \end{bmatrix}$: the correction term is $L(y - C\hat x) = L(x_1 - \hat x_1)$;
$C = \begin{bmatrix} 0 & 1 \end{bmatrix}$: the correction term is $L(x_2 - \hat x_2)$;
$C = \begin{bmatrix} 1 & 1 \end{bmatrix}$: the correction term is $L(x_1 + x_2 - \hat x_1 - \hat x_2)$.

To examine how the observers work, we perform some numerical integrations. Setting $u = 0$ and $x(0) = \begin{bmatrix} 1 & 1 \end{bmatrix}^T$, we have for the plant evolution

$$x(t) = \exp(At)\,x(0) = \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} \cos t + \sin t \\ \cos t - \sin t \end{bmatrix}.$$

For the observer we assume no knowledge of the initial state, and thus set $\hat x(0) = 0$. The dynamics of the state estimate is

$$\dot{\hat x} = A\hat x + L(y - C\hat x) = (A - LC)\hat x + Ly.$$

For the purposes of this example we could actually integrate this analytically, treating $y$ as a driving term. However,

in the spirit of recursive state estimation, we instead integrate numerically (Matlab example):

Ttot = 10; Nsteps = 500;
t = linspace(0, Ttot, Nsteps);
dt = t(2) - t(1);
x = [cos(t)+sin(t); cos(t)-sin(t)];
xhat = zeros(2, Nsteps);
for ii = 2:Nsteps
    y = C*x(:,ii-1);
    xhat(:,ii) = xhat(:,ii-1) + dt*(A*xhat(:,ii-1) + L*(y - C*xhat(:,ii-1)));
end

In the following we plot the results, with $x_1$ as solid black, $x_2$ as solid red, $\hat x_1$ as dashed black, and $\hat x_2$ as dashed red.

[Figure: observer performance for $C = \begin{bmatrix} 1 & 0 \end{bmatrix}$.]

[Figure: observer performance for $C = \begin{bmatrix} 0 & 1 \end{bmatrix}$.]

[Figure: observer performance for $C = \begin{bmatrix} 1 & 1 \end{bmatrix}$, with $y$ shown in blue.]
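The same Euler loop is easy to reproduce in Python; the sketch below uses $C = \begin{bmatrix} 1 & 0 \end{bmatrix}$ and a gain $L = \begin{bmatrix} 5 & 5 \end{bmatrix}^T$ (an illustrative choice that places the eigenvalues of $A - LC$ at $\{-2, -3\}$) and checks that the estimate converges to the true state.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[5.0], [5.0]])   # places eigenvalues of A - LC at {-2, -3}

T, N = 10.0, 5000
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

# Exact plant solution with x(0) = [1, 1] and u = 0.
x = np.vstack([np.cos(t) + np.sin(t), np.cos(t) - np.sin(t)])
xhat = np.zeros((2, N))        # observer starts with no state knowledge

for i in range(1, N):
    y = C @ x[:, i-1]
    xhat[:, i] = xhat[:, i-1] + dt * (A @ xhat[:, i-1]
                                      + (L @ (y - C @ xhat[:, i-1])))

final_err = np.abs(x[:, -1] - xhat[:, -1]).max()
print(final_err)  # small: the estimate has converged onto the true state
```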

Could we have guessed the forms of these observers? For $C = \begin{bmatrix} 1 & 0 \end{bmatrix}$, note that we can construct an intuitive observer by setting $\hat x_1 = y$, $\hat x_2 = \dot y$, which should work well as long as there is negligible measurement noise. For $C = \begin{bmatrix} 0 & 1 \end{bmatrix}$ we obviously have $\hat x_2 = y$, and we can guess $\hat x_1(t) = \hat x_1(0) + \int_0^t y(s)\,ds$, but it is not entirely clear how we should choose $\hat x_1(0)$. Note that the Luenberger observer does something more complicated, as it integrates

$$\dot{\hat x} = (A - LC)\hat x + Ly,$$

whose solution, with $\hat x(0)$ arbitrary, is

$$\hat x(t) = \exp[(A - LC)t]\,\hat x(0) + \int_0^t ds\,\exp[(A - LC)(t - s)]\,L\,y(s).$$

There doesn't seem to be an obvious intuitive strategy for $C = \begin{bmatrix} 1 & 1 \end{bmatrix}$.

Before turning to consider noisy observation scenarios, we take a brief look at the behavior of the Luenberger observer for $C = \begin{bmatrix} 1 & 0 \end{bmatrix}$ and varying gain. Recall the gain $L$ we designed above. What happens if we try simply scaling this gain up?

[Figure: observer performance for $C = \begin{bmatrix} 1 & 0 \end{bmatrix}$ with the gain scaled up.]

If we instead design for much more negative eigenvalues, the performance is transiently bad but does indeed settle quite quickly:

[Figure: transient behavior of the high-gain observer.]

Stochastic models (notation)

We will normally write linear stochastic control models in the form

$$dx_t = Ax_t\,dt + Bu_t\,dt + F\,dV_t, \qquad dy_t = Cx_t\,dt + G\,dW_t.$$

Here the subscripts serve to remind us of things that depend on time, and the vector nature of $x$ and/or $y$ is implicit. The stochastic increments $dV_t$ and $dW_t$ satisfy

$$\langle dV_t \rangle = \langle dW_t \rangle = 0, \qquad dV_t^2 = dW_t^2 = dt, \qquad dV_t\,dW_t = 0,$$

and for $s \neq t$ we have $\langle dV_s\,dV_t \rangle = \langle dW_s\,dW_t \rangle = 0$. We can informally think of $dV_t$ and $dW_t$ as Gaussian white noises with zero mean and variance $dt$. It is conventional to refer to $V_t$ as process noise and to $W_t$ as measurement noise (or observation noise).

It is important to be aware of the fact that stochastic differential equations (SDEs) of the type we have written are, rigorously speaking, a sort of shorthand notation for stochastic integrals. There is an important distinction between Itô and Stratonovich stochastic integrals, and therefore between Itô and Stratonovich SDEs. In control theory one normally works with Itô SDEs, and in any case there is a straightforward recipe for converting a model between Itô and Stratonovich forms. The Stratonovich form is sometimes preferred (especially in physics) because Stratonovich SDEs can be manipulated using standard calculus. For Itô SDEs, however, one must in general be careful to observe the Itô rule, which says that if $x_t$ obeys the Itô SDE

$$dx_t = A(x_t)\,dt + B(x_t)\,dW_t,$$

then a variable $y_t$ related to $x_t$ via $y_t = U(x_t)$ evolves according to

$$dy_t = \left[ A(x_t)\,\frac{\partial U}{\partial x} + \frac{1}{2}\,B^2(x_t)\,\frac{\partial^2 U}{\partial x^2} \right] dt + B(x_t)\,\frac{\partial U}{\partial x}\,dW_t,$$

where the second-derivative term in the square brackets is known as the Itô correction. Note that if $U$ is a linear function, the Itô correction vanishes and we recover the prediction of normal calculus. An important advantage of working with Itô SDEs is that if $x_t$ obeys the Itô SDE above, then $x_t$ is uncorrelated with $dW_t$. This considerably simplifies the computation of statistical moments.

For example, consider the linear SDE model $dx_t = Ax_t\,dt + F\,dV_t$, with $x_t$ a scalar and $A < 0$ (the Ornstein-Uhlenbeck model). We then have

$$d\langle x_t \rangle = A\langle x_t \rangle\,dt + F\langle dV_t \rangle = A\langle x_t \rangle\,dt \quad\Longrightarrow\quad \langle x_t \rangle = x_0\,\exp(At),$$

and if $y_t = x_t^2$, so that $\langle y_t \rangle$ is the second moment of $x_t$, the Itô rule gives

$$dy_t = \left[ 2Ax_t^2 + F^2 \right] dt + 2Fx_t\,dV_t \quad\Longrightarrow\quad d\langle y_t \rangle = \left( 2A\langle y_t \rangle + F^2 \right) dt,$$

$$\langle y_t \rangle = \exp(2At)\,y_0 + F^2 \int_0^t ds\,\exp[2A(t - s)].$$

If we assume that $x_t$ evolves from a known value $x_0$ at $t = 0$, then $y_0 = x_0^2$ and the mean-square uncertainty in $x_t$ is

$$\langle x_t^2 \rangle - \langle x_t \rangle^2 = F^2 \int_0^t ds\,\exp[2A(t - s)] = \frac{F^2}{2A}\left[ \exp(2At) - 1 \right].$$

The mean-square uncertainty thus has a steady-state value as $t \to \infty$ (recall $A < 0$):

$$\langle x_t^2 \rangle - \langle x_t \rangle^2 \to \frac{F^2}{2|A|}.$$

In numerical simulations we can simply update $x_t$ according to
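The steady-state variance $F^2/(2|A|)$ can be checked by direct simulation. A Python/NumPy sketch using a simple Euler-Maruyama update over an ensemble of paths (the values of $A$, $F$, the step size, and the ensemble size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A, F = -1.0, 0.5            # dx = A x dt + F dV, with A < 0
dt, nsteps, npaths = 2e-3, 10000, 2000

x = np.zeros(npaths)        # all paths start from a known value x0 = 0
for _ in range(nsteps):
    dV = np.sqrt(dt) * rng.standard_normal(npaths)  # variance-dt increments
    x += A * x * dt + F * dV

sample_var = x.var()
print(sample_var)  # should be near F**2 / (2*abs(A)) = 0.125
```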

$$x_{t+\Delta t} = x_t + (Ax_t + Bu_t)\,\Delta t + F\,\Delta V_t, \qquad y_{t+\Delta t} = y_t + Cx_t\,\Delta t + G\,\Delta W_t,$$

where $\Delta V_t$ and $\Delta W_t$ are independent normal random variables with variance $\Delta t$. In Matlab, if dt is a variable with some assigned numerical value:

Vt = sqrt(dt)*randn();
Wt = sqrt(dt)*randn();

This simple procedure is known as the Itô-Euler stochastic integration routine, which is easy to implement but has the disadvantage that it only converges to order $\Delta t^{1/2}$. Higher-order integrators can be found in various computer packages (including SDE toolboxes for Matlab) and are described in textbooks.

State observers: performance with noise

First we consider the case of process noise only. Returning to our simple harmonic oscillator with $C = \begin{bmatrix} 1 & 0 \end{bmatrix}$, we can add a noisy force acting on the oscillator by making $F$ nonzero in the velocity equation. Simulating both the plant and the response of the Luenberger observer, which we now write as

$$d\hat x_t = A\hat x_t\,dt + Bu_t\,dt + L\,(dy_t - C\hat x_t\,dt),$$

we obtain:

[Figure: observer tracking with process noise.]

Here we used the $L$ computed for the earlier target eigenvalues. It is clear that the observer still tracks the state, but with degraded performance due to the process noise. If we try turning up the observer gain, by using values computed for more negative target eigenvalues, we do much better:

[Figure: observer tracking with process noise and increased observer gain.]

Next we set $F$ to zero but make $G$ nonzero. The results are:

[Figure: observer tracking with measurement noise only.]

Here the blue curve shows the performance of a naive velocity estimator $\hat x_2 = \dot y$ (note that there is some aliasing). The Luenberger observer does much better, as expected. In order to bother the Luenberger observer, we turn $G$ all the way up:

[Figure: observer tracking with large measurement noise.]

Now if we try to get greedy with this much observation noise, by turning up $L$ to the values that would achieve the fast eigenvalues in the noiseless system:

[Figure: high-gain observer with large measurement noise.]

we see that the estimation of velocity becomes very poor. So apparently, too much observer gain is a bad thing when there is noise. Is there an optimal value of the Luenberger gain? This would seem to be an especially important question when there is both process noise and measurement noise.

The Kalman-Bucy filter

To answer this sort of question, we first have to state how we judge the observer's performance quantitatively. It is most common to adopt a minimum-least-squares framework, in which our objective is to design the estimator (the method of generating $\hat x_t$ from knowledge of $y_s$ with $s \le t$) that achieves the lowest possible value of

$$\left\langle (x_t - \hat x_t)(x_t - \hat x_t)^T \right\rangle.$$

As discussed in A&M Section 7.4, for the plant and observation model

$$dx_t = Ax_t\,dt + Bu_t\,dt + F\,dV_t, \qquad dy_t = Cx_t\,dt + G\,dW_t,$$

we have the following theorem (Kalman-Bucy, 1961): the optimal estimator has the form of a linear observer

$$d\hat x_t = A\hat x_t\,dt + Bu_t\,dt + L_t\,(dy_t - C\hat x_t\,dt), \qquad \hat x_0 = \langle x_0 \rangle,$$

where $L_t = P_t C^T (GG^T)^{-1}$ and $P_t = \langle (x_t - \hat x_t)(x_t - \hat x_t)^T \rangle$ is the (symmetric and positive-definite) estimation error covariance matrix, which satisfies the following matrix Riccati equation:

$$\frac{dP_t}{dt} = FF^T + AP_t + P_t A^T - P_t C^T (GG^T)^{-1} C P_t, \qquad P_0 = \langle (x_0 - \hat x_0)(x_0 - \hat x_0)^T \rangle.$$

It is important to note that the Kalman filter provides both a point estimate of the evolving system state and a computation of the estimation error covariance matrix: it gives you its best guess and a numerical uncertainty. When the system is stationary, and if $P_t$ converges, the observer gain settles to a constant:

$$L = PC^T (GG^T)^{-1}, \qquad 0 = FF^T + AP + PA^T - PC^T (GG^T)^{-1} CP.$$

The second equation is called the algebraic Riccati equation, and may be solved using Matlab's lqe function. We see that the essence of Kalman filtering is an optimal choice of the observer gain, which may be time-dependent in a way that reflects our evolving degree of confidence in our state estimate. The general structure is to apply high observer gain when we have large uncertainty, and to reduce it as our uncertainty approaches a limiting value set by the process and measurement noises.

As an example, let us compute the Kalman gain for our simple harmonic oscillator example, with the process and measurement noise strengths used above. A simulation with the resulting steady-state gain looks as follows:

[Figure: Kalman-filter tracking of the noisy harmonic oscillator.]

It is interesting to note that, as a consequence of its least-squares optimality, the Kalman-Bucy filter achieves what is known as whitening of the innovations process $dy_t - C\hat x_t\,dt$. That is, if $\hat x_t$ is propagated by the Kalman-Bucy filter, then $dy_t - C\hat x_t\,dt$ becomes a completely random signal (Gaussian white noise); roughly, we can think that $\hat x_t$ becomes good enough that subtracting $C\hat x_t\,dt$ from $dy_t$ removes all the information from the observed signal. The notions of least-squares optimal state estimation, the innovations process, and whitening all carry over to nonlinear scenarios.
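The steady-state gain can be computed in Python as well, as an analogue of Matlab's lqe: solve the algebraic Riccati equation with SciPy (via the dual control-type ARE on $(A^T, C^T)$) and form $L = PC^T(GG^T)^{-1}$. The noise strengths $F$ and $G$ below are illustrative values, not the ones used in the plots.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.0], [1.0]])   # process noise on the velocity (illustrative)
G = np.array([[0.5]])          # measurement noise strength (illustrative)

R = G @ G.T
# Filter ARE: FF^T + AP + PA^T - P C^T (GG^T)^{-1} C P = 0,
# obtained from scipy's control-type ARE by passing (A^T, C^T).
P = solve_continuous_are(A.T, C.T, F @ F.T, R)
L = P @ C.T @ np.linalg.inv(R)

residual = F @ F.T + A @ P + P @ A.T - P @ C.T @ np.linalg.inv(R) @ C @ P
obs_eigs = np.linalg.eigvals(A - L @ C).real
print(np.abs(residual).max())  # ~0: P solves the Riccati equation
print(obs_eigs)                # negative: stable estimation-error dynamics
```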