Optimal State Estimators for Linear Systems with Unknown Inputs


Shreyas Sundaram and Christoforos N. Hadjicostis

Abstract: We present a method for constructing linear minimum-variance unbiased state estimators for discrete-time linear stochastic systems with unknown inputs. Our design provides a characterization of estimators with delay, which eases the established necessary conditions for the existence of unbiased estimators with zero delay. A consequence of using delayed estimators is that the noise affecting the system becomes correlated with the estimation error. We handle this correlation by increasing the dimension of the estimator appropriately.

I. INTRODUCTION

In practice, it is often the case that a dynamic system can be modeled as having unknown inputs. When the unknown inputs are assumed to have some defined structure (such as being bounded in norm), robust filtering techniques have been developed to estimate the state of the system [1]. Researchers have also proposed methods to optimally estimate the system state when the unknown inputs are completely unconstrained [2], [3], [4], [5]. These latter works revealed that the system must satisfy certain strict conditions in order to reject the unknown inputs. Furthermore, it was shown in [5] that this decoupling might force the system noise to become correlated with the estimation error (i.e., the noise behaves as if it is colored). In [4], Saberi et al. showed through a geometric approach that allowing delays in the estimator can relax the necessary conditions for optimal estimation. Delayed estimators were also studied in [6], but the design procedure outlined in that paper did not fully utilize the freedom in the estimator gain matrix and ignored the correlation between the noise and the error [7]. In this paper, we study delayed estimators for linear systems with (unconstrained) unknown inputs and develop a method to construct optimal linear estimators. More specifically, our goal is to estimate the entire system state through linear recursive estimators that
minimize the mean-square estimation error. In addition, we require that the estimator be unbiased (i.e., the expected value of the estimation error must be zero). Our approach is an extension of earlier work on observers for deterministic linear systems with unknown inputs [8], [9]. The fact that we incorporate delays in our design procedure allows us to construct optimal estimators for a much larger class of systems than that considered by [2], [3], [5]. Our approach is purely algebraic and is more direct than the method in [4], which first transforms the estimator into a dual system and then uses techniques from H2-optimal control in order to obtain the estimator parameters. Finally, our design procedure avoids the problem of colored noise described in [5] by increasing the dimension of the estimator appropriately, and it makes full use of the freedom in the gain matrix, which produces better results than the method in [6].

This material is based upon work supported in part by the National Science Foundation under NSF Career Award 0092696 and NSF EPNES Award 0224729, and in part by the Air Force Office of Scientific Research under URI Award No. F49620-01-1-0365URI. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of NSF or AFOSR. The authors are with the Coordinated Science Laboratory and the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA. E-mail: {ssundarm, chadjic}@uiuc.edu.

II. PRELIMINARIES

Consider a discrete-time stochastic linear system of the form

  x_{k+1} = A x_k + B u_k + w_k,
  y_k     = C x_k + D u_k + v_k,                                           (1)

with state vector x in R^n, unknown input u in R^m, output y in R^p, and system matrices (A, B, C, D) of appropriate dimensions. The noise processes w_k and v_k are assumed to be uncorrelated, white, and zero-mean, with covariance matrices Q_k and R_k, respectively. Note that we omit known inputs in the above equations for clarity of development. We also assume without loss of
generality that the matrix [B^T D^T]^T has full column rank. This assumption can always be enforced by an appropriate transformation and renaming of the unknown input signals. Note that no assumptions are made on the distribution of the unknown inputs (and that is the reason for distinguishing between u_k and the noise processes v_k and w_k). The response of system (1) over α + 1 time units is given by

  Y_{k:k+α} = Θ_α x_k + M_α U_{k:k+α} + M_{w,α} W_{k:k+α−1} + V_{k:k+α},   (2)

where

  Y_{k:k+α} = [y_k^T ⋯ y_{k+α}^T]^T,      U_{k:k+α} = [u_k^T ⋯ u_{k+α}^T]^T,
  V_{k:k+α} = [v_k^T ⋯ v_{k+α}^T]^T,      W_{k:k+α−1} = [w_k^T ⋯ w_{k+α−1}^T]^T,

and

  Θ_α = [C; CA; ⋯; CA^α],

  M_α = [D          0          ⋯  0;
         CB         D          ⋯  0;
         ⋮                     ⋱  ⋮;
         CA^{α−1}B  CA^{α−2}B  ⋯  D],

  M_{w,α} = [0         0         ⋯  0;
             C         0         ⋯  0;
             ⋮                   ⋱  ⋮;
             CA^{α−1}  CA^{α−2}  ⋯  C].
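The stacked matrices in (2) are mechanical to construct numerically. The following sketch (the function names, the NumPy implementation, and the toy matrices in any usage are ours, not the paper's) builds Θ_α and M_α, and searches for the smallest delay α satisfying the rank condition of Theorem 1 below, up to the bound n − q + 1 of Theorem 2:

```python
import numpy as np

def theta(A, C, alpha):
    """Stacked map Theta_alpha = [C; CA; ...; CA^alpha] from (2)."""
    blocks, Ak = [], np.eye(A.shape[0])
    for _ in range(alpha + 1):
        blocks.append(C @ Ak)
        Ak = A @ Ak
    return np.vstack(blocks)

def m_alpha(A, B, C, D, alpha):
    """Block-Toeplitz M_alpha from (2): D on the block diagonal,
    C A^(i-j-1) B in block (i, j) for i > j, zeros above the diagonal."""
    p, m = D.shape
    M = np.zeros(((alpha + 1) * p, (alpha + 1) * m))
    powers = [np.eye(A.shape[0])]
    for _ in range(alpha):
        powers.append(A @ powers[-1])
    for i in range(alpha + 1):
        for j in range(i + 1):
            M[i*p:(i+1)*p, j*m:(j+1)*m] = D if i == j else C @ powers[i - j - 1] @ B
    return M

def minimal_delay(A, B, C, D):
    """Smallest alpha with rank M_alpha - rank M_(alpha-1) = m, searched
    up to n - q + 1 (q = nullity of D); None if the bound is exceeded."""
    n, m = A.shape[0], B.shape[1]
    q = D.shape[1] - np.linalg.matrix_rank(D)
    r_prev = 0  # rank of the (empty) matrix M_{-1}
    for alpha in range(n - q + 2):
        r = np.linalg.matrix_rank(m_alpha(A, B, C, D, alpha))
        if r - r_prev == m:
            return alpha
        r_prev = r
    return None
```

Iterating α upward in this way mirrors step 1 of the design procedure summarized in Section V.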

The matrices Θ_α and M_α in the above equation can be expressed in a variety of ways. We will be using the following identities in our derivations:

  Θ_α = [C; Θ_{α−1} A] = [Θ_{α−1}; CA^α],                                  (3)
  M_α = [D 0; Θ_{α−1}B M_{α−1}] = [M_{α−1} 0; Cζ_{α−1} D],                 (4)

where ζ_{α−1} = [A^{α−1}B A^{α−2}B ⋯ B]. In the rest of the paper, we will use E{·} to denote the expected value of a stochastic parameter. The notation I_r represents the r × r identity matrix, and (·)^T indicates matrix transpose. We are now in place to proceed with the construction of an estimator for the states in (1).

III. UNBIASED ESTIMATION

Consider an estimator of the form

  z_{k+1} = A z_k + K_k (Y_{k:k+α} − Θ_α z_k),                             (5)

where the nonnegative integer α is the estimator delay, and the matrix K_k is chosen to (i) make the estimator unbiased and (ii) minimize the mean-square error between z_{k+1} and x_{k+1}. Note that for α = 0 the above estimator is in the form of the optimal estimator for systems with no unknown inputs [10]. Using (2), we obtain the estimation error as

  e_{k+1} ≜ z_{k+1} − x_{k+1}
         = (A − K_k Θ_α) z_k + K_k Y_{k:k+α} − A x_k − B u_k − w_k
         = (A − K_k Θ_α) e_k + K_k V_{k:k+α} + K_k M_α U_{k:k+α} − B u_k
           + K_k M_{w,α} W_{k:k+α−1} − w_k.                                (6)

In order for the estimator to be unbiased (i.e., E{e_k} = 0 for all k, regardless of the values of the unknown inputs), we require that

  K_k M_α = [B 0 ⋯ 0].                                                     (7)

The solvability of the above condition is characterized by the following theorem.

Theorem 1: There exists a matrix K_k satisfying (7) if and only if

  rank M_α − rank M_{α−1} = m.                                             (8)

Proof: There exists a K_k satisfying (7) if and only if the row space of the matrix [B 0 ⋯ 0] lies in the space spanned by the rows of M_α. This is equivalent to the condition

  rank [M_α; B 0 ⋯ 0] = rank M_α.

Using (4), we get

  rank [M_α; B 0 ⋯ 0] = rank [D 0; Θ_{α−1}B M_{α−1}; B 0].

By our assumption that the matrix [B^T D^T]^T has full column rank, we get rank [M_α; B 0 ⋯ 0] = m + rank M_{α−1}, thereby completing the proof.

The result in the above theorem was also obtained in [6] for the specific case D = 0. Note that (8) is the condition for inversion of the inputs with known initial state, as given in [11]. If we set α = 1, condition (8) becomes

  rank [D 0; CB D] = m + rank D,

which is the well-known necessary condition for unknown-input observers and unbiased estimators that estimate x_{k+1} based on y_k and y_{k+1} [3], [5]. This is a fairly strict condition, and it demonstrates the utility of a delayed estimator. When designing such an estimator, one can start with α = 0 and increase α until a value is found that satisfies (8). An upper bound on α is provided by the following theorem.

Theorem 2: Let q be the dimension of the nullspace of D. Then the delay of the unbiased estimator (5) will not exceed α = n − q + 1 time-steps. If (8) is not satisfied for α = n − q + 1, then unbiased estimation of all the states is not possible with an estimator of the form given in (5).

The proof of the above theorem is immediately obtained from the following result in [12], which considered the problem of system invertibility.

Lemma 1: Let q be the dimension of the nullspace of D. Then there exists an α satisfying (8) if and only if rank M_{n−q+1} − rank M_{n−q} = m.

It is apparent from (6) that for α > 0 the error will generally be a function of multiple time-samples of the noise processes w_k and v_k. In other words, the noise becomes colored from the perspective of the error. Darouach et al. studied this situation for α in {0, 1} in [5] and obtained certain strict conditions for the existence of unbiased optimal estimators of dimension n. Consequently, the estimators proposed in that paper can only be applied to a restricted class of systems. In the study of Kalman filters for systems with no unknown inputs, it has been shown that colored noise can be handled by increasing the dimension of the estimator [10]. In the next section, we will apply this technique to construct an optimal estimator for the system in (1). The resulting estimator can be applied to a much larger class of systems than the estimators presented in [5], [2], [3].

IV. OPTIMAL ESTIMATOR

If Theorem 1 is satisfied for α = 0, the noise in (6) will not be colored, and an optimal estimator of dimension n can be constructed. The resulting estimator is called an optimal predictor in [5] and
can be found by using the method in that paper. To construct an optimal estimator for α > 0, we rewrite system (1) to obtain a new system given by

  x̄_{k+1} = Ā x̄_k + B̄ u_k + B̄_n n_k,
  y_k     = C̄ x̄_k + D u_k,                                                (9)

where

  x̄_k = [x_k^T  W_{k:k+α−2}^T  V_{k:k+α−1}^T]^T,   n_k = [w_{k+α−1}^T  v_{k+α}^T]^T,
  B̄ = [B^T 0 ⋯ 0]^T,

and B̄_n is zero except for an I_n block in the rows corresponding to w_{k+α−1} and an I_p block in the rows corresponding to v_{k+α}.

The matrix Ā consists of the first block row [A I_n 0 ⋯ 0] (capturing x_{k+1} = A x_k + w_k), together with shift structure for the stored noise samples: I_n blocks on the superdiagonal of the rows corresponding to W_{k:k+α−2}, and I_p blocks on the superdiagonal of the rows corresponding to V_{k:k+α−1}. The output matrix is C̄ = [C 0 ⋯ 0 I_p 0 ⋯ 0], where the I_p block multiplies v_k. Note that the state vector in this new system has dimension n̄ ≜ α(n + p) for α > 0. From (2), the output of this augmented system over α + 1 time-steps is given by

  Y_{k:k+α} = Θ̄_α x̄_k + M_α U_{k:k+α} + M̄_{n,α} n_k,                      (10)

where the block row of Θ̄_α corresponding to y_{k+i} (0 ≤ i ≤ α) contains the coefficient CA^i on x_k, the coefficients CA^{i−1−j} on w_{k+j} (0 ≤ j ≤ min(i − 1, α − 2)), and I_p on v_{k+i} (for i ≤ α − 1), and

  M̄_{n,α} = [0; ⋯; 0; C I_p]

has a single nonzero block row, since w_{k+α−1} and v_{k+α} affect only y_{k+α}. We notice that a portion of this equation is independent of the noise vector n_k, and so we can obtain some linear functions of the state vector x̄_k directly from the output. We will use the following definition and theorem to characterize these linear functions.

Definition 1 (Rank-d Linear Functional): Let Γ be a d × n̄ matrix with rank d. Then the quantity Γ x̄_k will be termed a rank-d linear functional of the state vector x̄_k.

Theorem 3: For system (9), with response over α + 1 time-steps given by (10), let β_α be the dimension of the left nullspace of M_{α−1}. Then it is possible to obtain a rank-β_α linear functional of x̄_k directly from the output of the system.

Proof: Let P̄ be a matrix whose rows form a basis for the left nullspace of M_{α−1}. Note that the number of rows of P̄ is

  β_α = αp − rank M_{α−1}.                                                 (11)

Define the matrix P = [P̄ 0], where the zero matrix has p columns. Using (4), we see that P M_α = 0 (and also P M̄_{n,α} = 0, since only the last block row of M̄_{n,α} is nonzero). Left-multiplying (10) by P, we obtain P Y_{k:k+α} = P Θ̄_α x̄_k. From the definition of Θ̄_α in (10), it is apparent that P Θ̄_α has full row rank (equal to β_α), and so the theorem is proved.

To estimate the remaining states of x̄_k, we choose a matrix H such that the matrix

  T = [P Θ̄_α; H]                                                          (12)

is invertible. In particular, H can be chosen so that T is orthogonal. Now consider the system that is obtained by performing the similarity transformation [x̂_{1,k}; x̂_{2,k}] ≜ T x̄_k, where x̂_{1,k} represents the first β_α states. The evolution of the state vector in this system is given by

  [x̂_{1,k+1}; x̂_{2,k+1}] = [A_{11} A_{12}; A_{21} A_{22}] [x̂_{1,k}; x̂_{2,k}] + [P Θ̄_α; H] B̄ u_k + [P Θ̄_α; H] B̄_n n_k,

where [A_{11} A_{12}; A_{21} A_{22}] ≜ T Ā T^T. From Theorem 3, we see that x̂_{1,k} is directly available from the output of the system over α + 1 time-steps, since x̂_{1,k} = P Θ̄_α x̄_k = P Y_{k:k+α}. Therefore, we only have to construct an estimator for x̂_{2,k}, which evolves according to the equation

  x̂_{2,k+1} = A_{21} x̂_{1,k} + A_{22} x̂_{2,k} + H B̄ u_k + H B̄_n n_k
            = A_{22} x̂_{2,k} + A_{21} P Y_{k:k+α} + H B̄ u_k + H B̄_n n_k.   (13)

Let U be a matrix such that [P; U] is square and invertible, and set G ≜ U, so that

  J ≜ [P; G]                                                               (14)

is invertible. Using (3), (4), and (10), we note that

  J M_α = [0; G M_α],   J M̄_{n,α} = [0; G M̄_{n,α}],   J Θ̄_α T^T = [I_{β_α} 0; L_1 L_2],

where

  [L_1 L_2] ≜ G Θ̄_α T^T.                                                   (15)

Left-multiplying (10) by J, we get

  [P; G] Y_{k:k+α} = [I_{β_α} 0; L_1 L_2] [x̂_{1,k}; x̂_{2,k}] + [0; G M_α] U_{k:k+α} + [0; G M̄_{n,α}] n_k.

We see that the first β_α rows of the above equation do not contain any information about x̂_{2,k}. The remaining rows, after substituting x̂_{1,k} = P Y_{k:k+α}, can be written as

  (G − L_1 P) Y_{k:k+α} = L_2 x̂_{2,k} + G M_α U_{k:k+α} + G M̄_{n,α} n_k.   (16)

To estimate x̂_{2,k}, we use (13) and (16) to construct an estimator of the form

  z_{k+1} = A_{22} z_k + A_{21} P Y_{k:k+α} + K_k ((G − L_1 P) Y_{k:k+α} − L_2 z_k),   (17)

where K_k is chosen to (i) make the estimator unbiased and (ii) minimize the mean-square error between z_{k+1} and

x̂_{2,k+1}. Using (13) and (16), we find the error between the two quantities to be

  e_{k+1} ≜ z_{k+1} − x̂_{2,k+1}
         = (A_{22} − K_k L_2) e_k + (K_k G M̄_{n,α} − H B̄_n) n_k + (K_k G M_α − [H B̄ 0 ⋯ 0]) U_{k:k+α}.   (18)

Note that the noise in (18) is no longer colored from the perspective of the error, which allows us to construct an optimal estimator. For an unbiased estimator (i.e., E{e_k} = 0), we require that

  K_k G M_α = [H B̄ 0 ⋯ 0].                                                (19)

The solvability of the above condition is given by the following theorem.

Theorem 4: There exists a matrix K_k satisfying (19) if and only if rank M_α − rank M_{α−1} = m.

Proof: Arguing as in the proof of Theorem 1, there exists a K_k satisfying (19) if and only if the row space of the matrix [H B̄ 0 ⋯ 0] is in the space spanned by the rows of G M_α. This is equivalent to the condition

  rank [G M_α; H B̄ 0] = rank G M_α.                                        (20)

Since P M_α = 0, we can use (4) and (14) to get

  rank [G M_α; H B̄ 0] = rank [P M_α; G M_α; H B̄ 0] = rank [M_α; H B̄ 0]
                       = rank [D 0; Θ_{α−1}B M_{α−1}; H B̄ 0].

Left-multiplying the second block row in the above expression by P̄ (which satisfies P̄ M_{α−1} = 0) and inserting the result into the matrix, we obtain

  rank [G M_α; H B̄ 0] = rank [D 0; Θ_{α−1}B M_{α−1}; P̄ Θ_{α−1}B 0; H B̄ 0].

Since P = [P̄ 0], we have P̄ Θ_{α−1}B = P Θ̄_α B̄. This, along with the definition of T in (12) and our assumption that the matrix [B^T D^T]^T has full column rank, gives us

  rank [G M_α; H B̄ 0] = m + rank M_{α−1},

because [P Θ̄_α B̄; H B̄] = T B̄ with T invertible, so that rank [D; P Θ̄_α B̄; H B̄] = rank [D; B̄] = m. Finally, we note that rank G M_α = rank J M_α = rank M_α, so (20) holds if and only if rank M_α = m + rank M_{α−1}, which concludes the proof.

The condition in the above theorem is the same as the one in Theorem 1. This means that the upper bound on the delay provided by Theorem 2 also applies to the reduced-order estimator in (17). To solve (19), we note that if the condition in Theorem 4 is satisfied, then the rank of G M_α will also be m + rank M_{α−1}. Let N be a matrix whose rows form a basis for the left nullspace of the last αm columns of G M_α; in particular, we can assume without loss of generality that N satisfies

  N G M_α = [0 0; I_m 0],                                                  (21)

where the top block has p − m rows. Note that G M_α has (α + 1)p − β_α rows and that the last αm columns of the matrix have the same rank as M_{α−1}. Using the expression for β_α given in (11), the number of rows in N is given by

  dim(N) = (α + 1)p − β_α − rank M_{α−1} = p.

From (19), we see that K_k must be
of the form

  K_k = K̄_k N                                                              (22)

for some K̄_k = [K̄_{1,k} K̄_{2,k}], where K̄_{1,k} has p − m columns and K̄_{2,k} has m columns. Equation (19) then becomes

  [K̄_{1,k} K̄_{2,k}] [0 0; I_m 0] = [H B̄ 0],                               (23)

from which it is obvious that K̄_{2,k} = H B̄ and K̄_{1,k} is a free matrix. Returning to equation (18), define

  [Φ_1; Φ_2] ≜ N L_2,   [Ψ_1; Ψ_2] ≜ N G M̄_{n,α},                          (24)

where Φ_2 and Ψ_2 have m rows each. Substituting (22) into (18), the expression for the estimation error can be written as

  e_{k+1} = ((A_{22} − H B̄ Φ_2) − K̄_{1,k} Φ_1) e_k + ((H B̄ Ψ_2 − H B̄_n) + K̄_{1,k} Ψ_1) n_k.   (25)

Denoting

  𝒜 ≜ A_{22} − H B̄ Φ_2,   ℬ ≜ H (B̄ Ψ_2 − B̄_n),   Π_k ≜ E{n_k n_k^T} = diag(Q_{k+α−1}, R_{k+α}),   (26)

we use (25) to obtain the error covariance matrix

  Σ_{k+1} ≜ E{e_{k+1} e_{k+1}^T}
          = 𝒜 Σ_k 𝒜^T + ℬ Π_k ℬ^T − K̄_{1,k} (𝒜 Σ_k Φ_1^T − ℬ Π_k Ψ_1^T)^T
            − (𝒜 Σ_k Φ_1^T − ℬ Π_k Ψ_1^T) K̄_{1,k}^T
            + K̄_{1,k} (Φ_1 Σ_k Φ_1^T + Ψ_1 Π_k Ψ_1^T) K̄_{1,k}^T.          (27)

The matrix K̄_{1,k} must be chosen to minimize the mean-square error (or, equivalently, the trace of the error covariance matrix). Recall from the definition of K̄_k in (22) that K̄_{1,k} has p − m columns. This means that there will be no freedom to minimize the mean-square error if the number of outputs is equal to the number of unknown inputs. If p > m, we take the gradient of (27) with respect to K̄_{1,k} and set it equal to zero to obtain the optimal gain as

  K̄_{1,k} = (𝒜 Σ_k Φ_1^T − ℬ Π_k Ψ_1^T) (Φ_1 Σ_k Φ_1^T + Ψ_1 Π_k Ψ_1^T)^{−1}.   (28)
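The optimal gain and the covariance recursion it minimizes are a few lines of linear algebra. A sketch under our own naming (A_s and B_s stand for the script quantities 𝒜 and ℬ of (26); the matrices passed in are assumed to have already been built by the design procedure), with a pseudo-inverse in place of the inverse, as the paper permits when the latter fails to exist:

```python
import numpy as np

def optimal_gain(A_s, B_s, Sigma, Pi, Phi1, Psi1):
    """Optimal gain (28):
    K1 = (A_s Sigma Phi1' - B_s Pi Psi1') (Phi1 Sigma Phi1' + Psi1 Pi Psi1')^+."""
    num = A_s @ Sigma @ Phi1.T - B_s @ Pi @ Psi1.T
    den = Phi1 @ Sigma @ Phi1.T + Psi1 @ Pi @ Psi1.T
    return num @ np.linalg.pinv(den)  # pseudo-inverse in case den is singular

def covariance_step(A_s, B_s, Sigma, Pi, Phi1, Psi1):
    """One step of the error-covariance recursion: (27) with the optimal
    gain (28) substituted in."""
    K1 = optimal_gain(A_s, B_s, Sigma, Pi, Phi1, Psi1)
    num = A_s @ Sigma @ Phi1.T - B_s @ Pi @ Psi1.T
    return A_s @ Sigma @ A_s.T + B_s @ Pi @ B_s.T - K1 @ num.T
```

As a sanity check, with Phi1 = I and Psi1 = 0 the gain collapses to A_s, the Kalman-style special case; iterating covariance_step to convergence yields the steady-state covariance whose trace is the mean-square error compared in the examples below.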

Substituting this expression into (27), we get the optimal covariance update equation

  Σ_{k+1} = 𝒜 Σ_k 𝒜^T + ℬ Π_k ℬ^T
            − (𝒜 Σ_k Φ_1^T − ℬ Π_k Ψ_1^T)(Φ_1 Σ_k Φ_1^T + Ψ_1 Π_k Ψ_1^T)^{−1}(𝒜 Σ_k Φ_1^T − ℬ Π_k Ψ_1^T)^T.   (29)

Note that equations (28) and (29) require the calculation of a matrix inverse. If this inverse fails to exist, it can be replaced with a pseudo-inverse [10]. We can now obtain an estimate of the original system states as follows. Using (22) and (23), we get the estimator gain in (17) to be

  K_k = [K̄_{1,k}  H B̄] N,                                                  (30)

where K̄_{1,k} is specified in (28). The estimator is initialized with initial state z_0 = H E{x̄_0}, which ensures that e_0 has an expected value of zero. The estimate of the original state vector in (1) is given by

  x̂_k = [I_n 0] T^T [P Y_{k:k+α}; z_k],                                    (31)

and the estimation error for the transformed state vector is

  ê_{k+1} ≜ [P Y_{k+1:k+α+1}; z_{k+1}] − [x̂_{1,k+1}; x̂_{2,k+1}] = [0; e_{k+1}],

where e_{k+1} is defined in (18). The error for the augmented system in (9) (with state vector x̄_k) can then be obtained as

  ē_{k+1} = T^T ê_{k+1} = H^T e_{k+1},                                      (32)

and the error covariance matrix of the original state vector is given by

  Σ_{x,k+1} ≜ [I_n 0] E{ē_{k+1} ē_{k+1}^T} [I_n 0]^T = [I_n 0] H^T Σ_{k+1} H [I_n 0]^T,   (33)

where Σ_{k+1} is given by (29). The trace of the above covariance matrix is the mean-square estimation error for the state vector in the original system. The initial error covariance matrix for the update equation (29) can also be obtained from (32) as Σ_0 = H Σ̄_0 H^T, where Σ̄_0 ≜ E{ē_0 ē_0^T} is the initial error covariance matrix for the augmented system in (9).

Remark 1: Note that the estimator uses the output of the system up to time-step k + α in order to estimate x_k. In the Kalman filter literature, such estimators are known as fixed-lag smoothers [10], where a lag of α is used to obtain a better estimate of the state x_{k−α} (i.e., to reduce the mean-square estimation error). In contrast, for the systems considered in this paper (which have unknown inputs), delays are generally necessary in order to ensure that the estimate is unbiased.

V. DESIGN PROCEDURE

We now summarize the steps that can
be used in designing a delayed estimator for the system given in (1).
1) Find the smallest α such that rank M_α − rank M_{α−1} = m. If the condition is not satisfied for α = n − q + 1 (where q is the dimension of the nullspace of D), it is not possible to obtain an unbiased estimate of the entire system state.
2) Construct the augmented system given in (9).
3) Choose P̄ (a basis for the left nullspace of M_{α−1}), P = [P̄ 0], and H so that T = [P Θ̄_α; H] is orthogonal. Also choose U and form the matrix G as in (14).
4) Find the rank-p matrix N satisfying (21).
5) Form the matrices L_1, L_2, [Φ_1; Φ_2], and [Ψ_1; Ψ_2] from (15) and (24).
6) At each time-step k, calculate K̄_{1,k} using equation (28) and update the error covariance matrix using equation (29).
7) Use (30) to obtain the estimator gain K_k.
8) The final estimator is given by equation (17). The estimate of the original state vector is given by (31), with error covariance matrix given by (33).

VI. EXAMPLE

A. Example 1

Consider a system of the form (1) with n = 2 states, p = 2 outputs, m = 2 unknown inputs, and noise covariances Q_k = 0.1 I_2 and R_k = 0.4 I_2. It is found that condition (8) holds for α = 2, so our estimator must have a minimum delay of two time-steps. The state and noise vectors in the augmented system are

  x̄_k = [x_k^T  w_k^T  v_k^T  v_{k+1}^T]^T,   n_k = [w_{k+1}^T  v_{k+2}^T]^T.

Using (11), we find β_α = 2, and we choose P̄ and U accordingly, together with an orthogonal completion H; the matrix N is then obtained from (21). We use (15) to obtain the matrices L_1 and L_2, and from equation (24) we get

the matrices [Φ_1; Φ_2] and [Ψ_1; Ψ_2]. Since Φ_2 and Ψ_2 have m = 2 rows, Φ_1 and Ψ_1 are empty matrices. Therefore, in this example we have no freedom to minimize the trace of the covariance matrix (i.e., all of the freedom in the estimator gain is used to obtain an unbiased estimate). Using (30) (and the fact that K̄_{1,k} is the empty matrix), we obtain a constant gain K_k. The final estimator is given by (17), and an estimate of the original system states can be obtained via equation (31). To test this estimator, the system is initialized with a random state with zero mean and covariance matrix I_2, and a set of sinusoidal signals is used as the unknown inputs to the system. The estimator is initialized with an initial state of zero, and the resulting estimates are shown in Fig. 1. Note that the estimated state should technically be delayed by two time-steps (since our estimator has a delay of α = 2), but we have shifted the estimate forward for purposes of comparison. After allowing equation (29) to run for several time-steps, we find that the error covariance matrix for the original system state (equation (33)) converges to

  Σ_x = [0.1156 0.032; 0.032 0.04],

which has a trace of 0.1556.

B. Example 2

Consider the four-state example from [6], with noise covariances Q_k = 0.1 I_4 and R_k = 0.1 I_2. Once again, we find that (8) is satisfied for α = 2. Following the procedure outlined in this paper, we find that the error covariance matrix converges to a matrix with a trace of 0.2824. In contrast, the trace of the error covariance matrix for the estimator constructed in [6] converges to 0.3778, which is approximately 34% worse than the performance achieved by our estimator. This discrepancy arises from the fact that the procedure in [6] ignores the correlation between the error and the noise.

VII. SUMMARY

We have provided a characterization of linear minimum-variance unbiased estimators for linear systems with unknown inputs and have provided a design procedure to obtain the estimator parameters. We have shown that it
will generally be necessary to use delayed measurements in order to obtain an unbiased estimate, and that these delays cause the system noise to become colored. We increased the dimension of our estimator in order to handle this colored noise. The resulting estimator can be applied to a more general class of systems than those currently studied in the literature.

Fig. 1. Simulation of system and estimator (actual and estimated states of the system in Example 1, plotted over 50 time-steps).

REFERENCES

[1] S. O. R. Moheimani, A. V. Savkin, and I. R. Petersen, "Robust filtering, prediction, smoothing and observability of uncertain systems," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 45, no. 4, pp. 446-457, Apr. 1998.
[2] P. K. Kitanidis, "Unbiased minimum-variance linear state estimation," Automatica, vol. 23, no. 6, pp. 775-778, Nov. 1987.
[3] M. Hou and R. J. Patton, "Optimal filtering for systems with unknown inputs," IEEE Transactions on Automatic Control, vol. 43, no. 3, pp. 445-449, Mar. 1998.
[4] A. Saberi, A. A. Stoorvogel, and P. Sannuti, "Exact, almost and optimal input decoupled (delayed) observers," International Journal of Control, vol. 73, no. 7, pp. 552-581, May 2000.
[5] M. Darouach, M. Zasadzinski, and M. Boutayeb, "Extension of minimum variance estimation for systems with unknown inputs," Automatica, vol. 39, no. 5, pp. 867-876, May 2003.
[6] J. Jin and M.-J. Tahk, "Time-delayed state estimator for linear systems with unknown inputs," International Journal of Control, Automation and Systems, vol. 3, no. 1, pp. 117-121, Mar. 2005.
[7] S. Sundaram and C. N. Hadjicostis, "Comments on 'Time-delayed state estimator for linear systems with unknown inputs'," International Journal of Control, Automation and Systems, vol. 3, no. 4, pp. 646-647, Dec. 2005.
[8] M. E. Valcher, "State observers for discrete-time linear systems with unknown inputs," IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 397-401, Feb. 1999.
[9] S. Sundaram and C. N. Hadjicostis, "On delayed observers for linear systems with unknown inputs," in Proceedings of the 44th IEEE Conference on Decision and
Control and European Control Conference 2005, Dec. 2005, pp. 7210-7215.
[10] B. D. O. Anderson and J. B. Moore, Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.
[11] M. K. Sain and J. L. Massey, "Invertibility of linear time-invariant dynamical systems," IEEE Transactions on Automatic Control, vol. AC-14, no. 2, pp. 141-149, Apr. 1969.
[12] A. S. Willsky, "On the invertibility of linear systems," IEEE Transactions on Automatic Control, vol. 19, no. 2, pp. 272-274, June 1974.