The optimal filtering of a class of dynamic multiscale systems


Science in China Ser. F Information Sciences 2004, Vol. 47, No. 4, 501-517

PAN Quan, ZHANG Lei, CUI Peiling & ZHANG Hongcai
Department of Automatic Control, Northwestern Polytechnical University, Xi'an 710072, China
Correspondence should be addressed to Pan Quan (email: QuanPan@nwpu.edu.cn)
Received October 2, 2003

Abstract  This paper discusses the optimal filtering of a class of dynamic multiscale systems (DMS), which are observed independently by several sensors distributed at different resolution spaces. The system obeys a known dynamic model, and the resolutions and sampling frequencies of the sensors are supposed to decrease by a factor of two from one scale to the next. By using the Haar wavelet transform to link the state nodes at each of the scales within a time block, a discrete-time model of this class of multiscale systems is derived, and the conditions for applying Kalman filtering are proven. For the linear time-invariant case, the controllability and observability of the system and the stability of the Kalman filter are studied, and a theorem is given: the Kalman filter is stable provided the system is controllable and observable at the finest scale. Finally, a constant-velocity process is used to gain insight into the efficiency offered by our model and algorithm.

Keywords: dynamic multiscale system, Kalman filtering, wavelet transform.

DOI: 10.1360/03yf07

In the last two decades, the multiscale autoregressive (MAR) framework [1-6] was developed to model a variety of random processes compactly and to estimate them efficiently. The MAR was first motivated by Basseville et al. [1]; building on their work, Chou et al. proposed multiscale stochastic models and optimal estimation algorithms for a rich class of processes whitened by wavelet transforms (WT) [7-11]. Luettgen [5] and Frakt [6] contributed much to the stochastic realization of the MAR.
MAR estimates stationary processes from numerous measurements by multiscale techniques, which aim at less computation than the traditional linear minimum mean-square-error estimation (LMMSE). In this paper we aim at the real-time optimal estimation of a class of dynamic multiscale systems (DMS) observed independently by J sensors with different resolutions, so that J sets of measurements are obtained. Since the observed target is the same, the measurements of the different sensors are correlated, and they can be fused to estimate the state of the DMS optimally.

Arrange the sensors by resolution from 1 to J, with sensor 1 having the highest resolution. The system is characterized by the equations

x_1(k_1 + 1) = A(k_1) x_1(k_1) + B(k_1) w(k_1),   (1)
z_j(k_j) = C_j(k_j) x_j(k_j) + v_j(k_j),  j = 1, 2, ..., J,   (2)

where x_j(k_j) ∈ R^(n_xj) is the state vector to be estimated and z_j(k_j) is its measurement; k_j denotes the sampling time at scale j. A(k_1), B(k_1) and C_j(k_j) are the system, input and measurement matrices. w(k_1) and v_j(k_j) are individually independent Gaussian white processes with zero mean and covariance matrices Q(k_1) and R_j(k_j), respectively. The state x_j belongs to a subspace of x_1, determined by the resolution of sensor j. For the DMS above, the well-known Kalman filter [12, 13] cannot be employed directly as the LMMSE algorithm.

An important case of the DMS is that in which the sampling frequencies of the sensors decrease by a factor of two from scale 1 to scale J. Fig. 1 illustrates the tree structure of the state nodes for such a DMS. In each time block k there are 2^(J-1) state nodes at scale 1, 2^(J-2) nodes at scale 2, ..., and only one node at scale J. The real-time optimal estimation of the state nodes should exploit all the measurements at every scale up to time k.

Fig. 1. Tree structure of the DMS state nodes within a time block.

Hong [14] presented a multiresolution filtering scheme for such DMSs, using the WT to link the state nodes at different scales. As shown in fig. 2, the downward arrows denote subsampling and the upward arrows upsampling; F_0 and F_1 are the analysis wavelet filters while F_2 and F_3 are the synthesis wavelet filters. The node x_(j+1)(n) is the lowpass output of x_j(n), and the discarded detail information is preserved in the wavelet coefficient y_(j+1)(n). Hong first estimated x_1(n) with the measurements at scale 1 within block k, then decomposed the estimate to scales 2 to J as the prediction of x_j(n).

Fig. 2. Implementation of the wavelet transforms on the DMS state nodes.

The updating was then performed at each scale by the local measurements, and finally the locally updated estimates were inversely transformed to scale 1 and fused together. It should be noted that in Hong's algorithm the updating is imposed only on x_j(n) and not on y_j(n), although y_j(n) is correlated with the state x_1(n) and the measurement z_1(n). y_j(n) should also be updated, and it would contribute to the estimation of x_1(n) through the inverse WT. To update y_j(n), the state and measurement equations of scale 1 can be decomposed to obtain the equations of y_j(n), and filtering based on these equations can implement the update. It should be mentioned, however, that the noises in those equations are colored. In ref. [15], the state and measurement equations of scale 1 are decomposed by the Haar wavelet to get the equations of x_2(n), whose noises are colored; the whitening method given there for the colored state equation is very complex, and if the wavelet filter length is not 2, the treatment of the colored noise becomes more complex still. It can be seen that updating y_j(n) to obtain a globally optimal algorithm is very difficult.

In ref. [16], Hong et al. developed a multiscale Kalman filtering technique for the standard state-space model, which is a special case of the DMS in which only one sensor works. They decomposed the random signal at several scales and reformed the state-space model by state augmentation. The Kalman filter of the new model is claimed to yield better results than the standard Kalman filter, but it should be noticed that the comparison is unfair: the Kalman filter of the new model uses some measurements after time k to estimate the state x_k. If we perform the associated Kalman smoothing with the standard state-space model, the same results are obtained with much less computation than Hong's scheme.
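The argument above hinges on the Haar analysis/synthesis pair: the lowpass output and the detail coefficients together carry all the information of the original sequence, which is why leaving y_j(n) un-updated loses optimality. A minimal one-level Haar transform sketch (an illustrative reimplementation, not code from the paper):

```python
import numpy as np

def haar_analysis(x):
    """One level of the Haar transform: lowpass (scaling) and
    detail (wavelet) coefficients, with downsampling by two."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # lowpass output
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def haar_synthesis(s, d):
    """Inverse Haar transform: upsample and recombine.
    Perfect reconstruction: synthesis(analysis(x)) == x."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

x = np.array([1.0, 2.0, 3.0, 4.0])
s, d = haar_analysis(x)
print(haar_synthesis(s, d))   # recovers x exactly
```

Dropping d before synthesis gives only the lowpass approximation, which is exactly the information loss incurred when the detail coefficients are never updated.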
In this paper, an optimal estimation algorithm for the above DMS is presented. We employ the Haar wavelet transform to represent the state projection across scales within a time block and generalize the DMS into the standard state-space model; classical Kalman filtering is then naturally the LMMSE algorithm. The stochastic controllability and observability of the DMS and the stability of its associated Kalman filter are discussed for the time-invariant case.

1 The DMS modeling

Referring to fig. 1, suppose the state vector x_j ∈ U_j, where U_1 ⊃ U_2 ⊃ ... ⊃ U_J is a closed subspace sequence. If x_j is the linear projection of x_1 from space U_1 to U_j, x_j = P_(1,j) x_1, where P_(1,j) is the linear projection operator, we have

P_(1,j) = P_(j-1,j) P_(j-2,j-1) ... P_(1,2),  j = 2, 3, ..., J,   (3)

and P_(1,1) is the identity operator I.

We want to implement the state-space projection from the finest scale 1 to the other scales within a time block k. The Haar wavelet, whose lowpass filter has only two taps, is a natural choice to approximate the linear state projection. Referring to fig. 1, recursively

x_j(k_j) = (1/sqrt(2)) [ x_(j-1)(2 k_j) + x_(j-1)(2 k_j + 1) ],   (4)

where k_j = 2^(J-j) k, 2^(J-j) k + 1, ..., 2^(J-j)(k+1) - 1. At the coarsest scale J, the node x_J(k) is a linear combination of all the nodes at scale 1 within time block k. Define

x(k) = col{ x_1(2^(J-1) k), x_1(2^(J-1) k + 1), ..., x_1(2^(J-1)(k+1) - 1) },   (5)

M_j(m) = [ 0 I_nx, ..., 0 I_nx, (1/sqrt(2))^(j-1) I_nx, ..., (1/sqrt(2))^(j-1) I_nx, 0 I_nx, ..., 0 I_nx ],   (6)

in which the 2^(j-1) nonzero blocks occupy block positions 2^(j-1) m + 1 through 2^(j-1)(m+1), m = 0, 1, ..., 2^(J-j) - 1, and I_nx is the n_x x n_x identity matrix. We have

x_j(2^(J-j) k + m) = M_j(m) x(k).   (7)

From eq. (2) there is

z_j(2^(J-j) k + m) = C_j(2^(J-j) k + m) M_j(m) x(k) + v_j(2^(J-j) k + m).   (8)

Denote

z_j(k) = col{ z_j(2^(J-j) k), z_j(2^(J-j) k + 1), ..., z_j(2^(J-j)(k+1) - 1) },   (9)

C_j(k) = col{ C_j(2^(J-j) k) M_j(0), C_j(2^(J-j) k + 1) M_j(1), ..., C_j(2^(J-j)(k+1) - 1) M_j(2^(J-j) - 1) }.   (10)
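The projection matrices of eqs. (5)-(7) can be built mechanically. The sketch below is an illustrative reimplementation under the block layout just described (J, n_x and the node ordering are taken from the text; the code itself is an assumption, not the paper's):

```python
import numpy as np

def M(j, m, J, nx):
    """Projection matrix M_j(m) of eq. (6): maps the stacked block state
    x(k) (2^(J-1) finest-scale nodes of dimension nx) to the node
    x_j(2^(J-j) k + m). Each scale-j node averages its 2^(j-1)
    finest-scale children with Haar weight (1/sqrt(2))^(j-1)."""
    n_nodes = 2 ** (J - 1)          # finest-scale nodes per time block
    width = 2 ** (j - 1)            # children under one scale-j node
    out = np.zeros((nx, n_nodes * nx))
    w = (1.0 / np.sqrt(2.0)) ** (j - 1)
    for i in range(width):
        col = (m * width + i) * nx  # block position 2^(j-1) m + i
        out[:, col:col + nx] = w * np.eye(nx)
    return out

# sanity check: at the finest scale (j = 1), M_1(m) just selects node m
J, nx = 3, 2
x_block = np.arange(2 ** (J - 1) * nx, dtype=float)   # stacked x(k)
print(M(1, 1, J, nx) @ x_block)   # -> [2. 3.], the second scale-1 node
```

At the coarsest scale, M(J, 0, J, nx) reproduces eq. (4) applied recursively: the single scale-J node as a weighted sum of all 2^(J-1) finest-scale nodes.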

We have

v_j(k) = col{ v_j(2^(J-j) k), v_j(2^(J-j) k + 1), ..., v_j(2^(J-j)(k+1) - 1) },   (11)

z_j(k) = C_j(k) x(k) + v_j(k),   (12)

where the covariance of v_j(k) is

R_j(k) = diag{ R_j(2^(J-j) k), R_j(2^(J-j) k + 1), ..., R_j(2^(J-j)(k+1) - 1) }.   (13)

Denote

z(k) = col{ z_1(k), z_2(k), ..., z_J(k) },   (14)
C(k) = col{ C_1(k), C_2(k), ..., C_J(k) },   (15)
v(k) = col{ v_1(k), v_2(k), ..., v_J(k) }.   (16)

Thus

z(k) = C(k) x(k) + v(k),   (17)

and the covariance of v(k) is

R(k) = diag{ R_1(k), R_2(k), ..., R_J(k) }.   (18)

Eq. (17) is the measurement equation of the DMS. The associated state transition equation must be derived to complete the modeling. Writing t = 2^(J-1)(k+1) for brevity, in time block (k+1) we have, recursively by eq. (1),

x_1(t + m) = [ prod_(n=1..m+1) A(t + m - n) ] x_1(t - 1)
           + sum_(i=0..m) [ prod_(n=1..m-i) A(t + m - n) ] B(t + i - 1) w(t + i - 1),   (19)

where an empty product is the identity. Denote

w(k) = col{ w(t - 1), w(t), ..., w(t + 2^(J-1) - 2) },   (20)

A(k, m) = prod_(n=1..m+1) A(t + m - n).   (21)

With t = 2^(J-1)(k+1) as above, denote

B(k, m) = [ prod_(n=1..m) A(t + m - n) B(t - 1),  prod_(n=1..m-1) A(t + m - n) B(t),  ...,  B(t + m - 1),  O, ..., O ].   (22)

Eq. (19) can then be rewritten as

x_1(t + m) = A(k, m) x_1(t - 1) + B(k, m) w(k).   (23)

Let

A(k) = [ O ... O A(k, 0) ; O ... O A(k, 1) ; ... ; O ... O A(k, 2^(J-1) - 1) ],
B(k) = col{ B(k, 0), B(k, 1), ..., B(k, 2^(J-1) - 1) }.   (24)

Then the state transition equation of the DMS is

x(k + 1) = A(k) x(k) + B(k) w(k),   (25)

where w(k) is a Gaussian white process uncorrelated with v(k), with covariance matrix

Q(k) = diag{ Q(t - 1), Q(t), ..., Q(t + 2^(J-1) - 2) }.   (26)

Eqs. (25) and (17) make up the state-space model of the DMS:

x(k + 1) = A(k) x(k) + B(k) w(k),
z(k) = C(k) x(k) + v(k).   (27)

Obviously the model meets the requirements of standard Kalman filtering, which is then the LMMSE estimation algorithm for the DMS. Denote by x^(k) the Kalman filtering result for model (27); x^(k) consists of the LMMSE estimates of the nodes at the finest scale. The LMMSE estimates of the nodes at the coarser scales can be obtained directly from x^(k). We have the following theorem.

Theorem 1. Suppose x^(k) is the LMMSE estimate of x(k). Then the LMMSE estimate of node x_j(2^(J-j) k + m) is M_j(m) x^(k).

Proof. Denote z_k = col{ z(1), z(2), ..., z(k) }. Since x(k) and z_k are jointly Gaussian, the LMMSE estimate of x(k) conditioned on z_k can be represented as

x^(k) = E[ x(k) | z_k ] = L z_k + b,   (28)

where L is a matrix and b is a vector. Denote by x~(k) = x(k) - x^(k) the estimation error. Since the LMMSE estimate is unbiased (ref. [12], p. 94, Theorem 2.3) and by the orthogonal projection theorem (ref. [12], p. 95, Theorem 2.5),

E[ x~(k) ] = 0  and  E[ x~(k) z_k' ] = 0.   (29)

Let x^_j(2^(J-j) k + m) = M_j(m) x^(k). It is easy to obtain

x~_j(2^(J-j) k + m) = x_j(2^(J-j) k + m) - x^_j(2^(J-j) k + m) = M_j(m) x~(k).   (30)

So from (29) we have

E[ x~_j(2^(J-j) k + m) ] = M_j(m) E[ x~(k) ] = 0,   (31)
E[ x~_j(2^(J-j) k + m) z_k' ] = M_j(m) E[ x~(k) z_k' ] = 0.

Thus, by the orthogonal projection theorem, the LMMSE estimate of x_j(2^(J-j) k + m) is

E[ x_j(2^(J-j) k + m) | z_k ] = x^_j(2^(J-j) k + m) = M_j(m) x^(k).   (32)

In fact x^_j(2^(J-j) k + m) = L_j z_k + b_j, where L_j = M_j(m) L and b_j = M_j(m) b. End of proof.

2 The stochastic controllability and observability of the time-invariant DMS

The matrices A(k), B(k) and C(k) in model (27) contain many zero elements, so in general the stochastic controllability and observability of the DMS model would be lost. Here we discuss them for the time-invariant DMS, in which

A(i) = A,  B(i) = B,  C_j(i) = C_j,  Q(i) = Q,  R_j(i) = R_j.   (33)

The model of the time-invariant DMS is

x(k + 1) = A x(k) + B w(k),
z(k) = C x(k) + v(k),   (34)

with

A = [ O ... O A ; O ... O A^2 ; ... ; O ... O A^(2^(J-1)) ],

B = [ B O ... O ; A B B ... O ; ... ; A^(2^(J-1)-1) B  A^(2^(J-1)-2) B ... B ],

C = col{ C_1, C_2, ..., C_J }.   (35)

The covariance matrices of w(k) and v(k) are the constant matrices Q and R.

2.1 The stochastic controllability

Since Q is positive, there exists a matrix sqrt(Q) such that sqrt(Q) sqrt(Q)' = Q, where Q = diag[ Q, ..., Q ]. Let B_q = B sqrt(Q). The controllability matrix of the pair (A, B_q) is

Ω = [ B_q, A B_q, ..., A^(n-1) B_q ],  n = 2^(J-1) n_x,   (36)

and the pair (A, B_q) is completely controllable if and only if rank(Ω) = n [13].

Theorem 2. Suppose that at the finest scale the pair (A, B_q) is completely controllable, where B_q = B sqrt(Q). Then the pair (A, B_q) is completely controllable if and only if B_q is of full row rank.

Proof. From the special structure of A, we have

A^n B_q = col{ A^((n-1) 2^(J-1) + 1) Ω_1, A^((n-1) 2^(J-1) + 2) Ω_1, ..., A^(n 2^(J-1)) Ω_1 },
Ω_1 = [ A^(2^(J-1)-1) B_q, A^(2^(J-1)-2) B_q, ..., B_q ].   (37)

It is well known that (A, B_q) is completely controllable if and only if Ω is of full row rank. Let

P_1 = [ I_nx O ... O ; -A I_nx ... O ; ... ; O ... -A I_nx ].   (38)

Obviously P_1 is nonsingular. Transforming Ω by P_1,

Ω- = P_1 Ω = col{ Ω^u, Ω^d },   (39)

where the rows of Ω^u collect the blocks B_q, A B_q, A^2 B_q, ..., while the rows of Ω^d contain only shifted copies of B_q alongside zero blocks. Since the pair (A, B_q) is completely controllable, its controllability matrix [ B_q, A B_q, ..., A^(n_x - 1) B_q ] is of full row rank, so Ω^u has full row rank, and the rows of Ω^u and Ω^d are linearly independent. It can be observed that Ω- is of full row rank if and only if B_q has full row rank; because sqrt(Q) is nonsingular, this is equivalent to B being of full row rank. End of proof.

2.2 The stochastic observability

Since R is positive, the pair (C, A) is completely observable if and only if its observability matrix

Λ = col{ C, C A, ..., C A^(n-1) },  n = 2^(J-1) n_x,   (40)

is of full column rank, rank(Λ) = n [13]. We have the following theorem.

Theorem 3. Suppose that at the finest scale the pair (C_1, A) is completely observable. Then the pair (C, A) is completely observable if and only if

rank( col{ C_2, C_1 } ) = n_x  for J = 2,   rank( C_1 ) = n_x  for J > 2.   (41)
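The rank tests of eqs. (36) and (40) are straightforward to check numerically. A sketch, using an illustrative finest-scale pair (the constant-velocity matrices below are assumptions for the example, not part of the theorems):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B], cf. eq. (36)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def obsv(C, A):
    """Observability matrix col{C, CA, ..., C A^(n-1)}, cf. eq. (40)."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# illustrative finest-scale pair: constant-velocity model with
# position-only measurement
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])

print(np.linalg.matrix_rank(ctrb(A, B)))   # -> 2: full row rank, controllable
print(np.linalg.matrix_rank(obsv(C, A)))   # -> 2: full column rank, observable
```

Full row rank of the controllability matrix and full column rank of the observability matrix are exactly the premises Theorems 2 and 3 place on the finest scale.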

Proof. Denote A^n = [ O ... O A_(r,n) ], where A_(r,n) = col{ A^((n-1) 2^(J-1) + 1), A^((n-1) 2^(J-1) + 2), ..., A^(n 2^(J-1)) }. Then C A^n = [ O ... O C A_(r,n) ]. Split C = [ C^lu C^ru ], where C^ru consists of the last n_x columns. The observability matrix becomes

Λ = [ C^lu C^ru ; O Λ^rd ],   (42)

where Λ^rd = col{ C A_(r,1), C A_(r,2), ... }. The pair (C, A) is completely observable if and only if Λ is of full column rank. Since (C_1, A) is completely observable, its observability matrix col{ C_1, C_1 A, ..., C_1 A^(n_x - 1) } has full column rank, and each of its rows appears among the rows of col{ C^ru, Λ^rd }; hence Λ is of full column rank if and only if C^lu has full column rank.

If J = 2, C^lu = col{ C_1, (1/sqrt(2)) C_2 }, and it has full column rank exactly when rank( col{ C_2, C_1 } ) = n_x. If J > 2, C^lu has (2^(J-1) - 1) n_x columns; divide them into blocks of n_x columns from left to right. Subtracting the first block from the second and applying suitable row transformations cancels the coarse-scale rows, leaving in each block only rows built from C_1; since these blocks do not overlap in rows, C^lu has full column rank if and only if rank(C_1) = n_x. End of proof.

3 Stability of the Kalman filter

Although the triple (C, A, B_q) will in general be neither completely controllable nor completely observable, it will be shown that, as long as (C_1, A, B_q) is completely controllable and observable at the finest scale, the Kalman filter of (C, A, B_q) is asymptotically stable. Let us first review a lemma and two definitions [12].

Lemma 1. Consider the time-invariant linear system

x(k + 1) = F x(k) + Γ w(k),  z(k) = H x(k) + v(k),   (43)

where w(k) is a Gaussian white process with zero mean and covariance Q. If the pair (F, H) is completely detectable and the pair (F, Γ sqrt(Q)) is completely stabilizable for any sqrt(Q) with sqrt(Q) sqrt(Q)' = Q, then the system's Kalman filter is asymptotically stable.

Definition 1. The pair (F, Γ) is completely stabilizable if there exists a nonsingular matrix T such that

T F T^(-1) = [ F_11 F_12 ; O F_22 ],  T Γ = [ Γ_1 ; O ],   (44)

with (F_11, Γ_1) completely controllable and |λ_i(F_22)| < 1.

Definition 2. The pair (F, H) is completely detectable if there exists a nonsingular matrix T such that

T F T^(-1) = [ F_11 O ; F_21 F_22 ],  H T^(-1) = [ H_1 O ],   (45)

with (F_11, H_1) completely observable and |λ_i(F_22)| < 1.

Based on the above lemma and definitions, we have the following theorem.

Theorem 4. If the pair (C_1, A, B_q) is completely controllable and observable, then the Kalman filter of the pair (C, A, B_q) is asymptotically stable.

Proof. (i) (A, B_q) is completely stabilizable. For an n-dimensional pair (F, Γ) there exists a nonsingular matrix T such that T Ω_F = col{ Ω_c, O } with Ω_c of full row rank, where Ω_F = [ Γ, F Γ, ..., F^(n-1) Γ ] is the controllability matrix; then T x = col{ x_c, x_uc } with x_c completely controllable and x_uc uncontrollable [13]. This gives a way to find T. It has been shown in the proof of Theorem 2 that Ω- = P_1 Ω = col{ Ω^u, Ω^d }, where Ω^u has full row rank and its rows are linearly independent of those of Ω^d. So

A- = P_1 A P_1^(-1) = [ A^(2^(J-1))  A^(2^(J-1)-1)  ...  A ; O ... O ; ... ; O ... O ].   (46)

There exists a nonsingular matrix P_2 such that

Ω_2 = P_2 Ω- = col{ Ω^u, P-_2 Ω^d } = col{ Ω^u, Ω^d_P, O },   (47)

where Ω^d_P is of full row rank; then col{ Ω^u, Ω^d_P } has full row rank, because the rows of Ω^u and Ω^d are linearly independent. Now

A_2 = (P_2 P_1) A (P_2 P_1)^(-1) = [ A_2^lu A_2^ru ; O O ].   (48)

A_2 is the canonical controllability decomposition of A: the sub-system matrix of the controllable part is A_2^lu, and that of the uncontrollable part is O, whose eigenvalues are all zero. According to Definition 1, (A, B_q) is completely stabilizable.

(ii) (C, A) is completely detectable. For an n-dimensional pair (F, H) there exists a nonsingular matrix T such that Λ_F T^(-1) = [ Λ_o O ] with Λ_o of full column rank, where Λ_F = col{ H, H F, ..., H F^(n-1) } is the observability matrix; then T x = col{ x_o, x_uo } with x_o completely observable and x_uo unobservable [13]. This gives a way to find T. In the proof of Theorem 3 we obtained

Λ = [ Λ^lu Λ^ru ; O Λ^rd ] = [ Λ^l Λ^r ],   (49)

where Λ^r has full column rank and every unobservable direction of Λ lies in the columns of Λ^l. There exists a nonsingular matrix P- such that

Λ [ P-^(-1) O ; O I ] = [ Λ~^l  O  Λ^r ],   (50)

where Λ~^l has full column rank. Thus

A_d = [ P- O ; O I ] A [ P-^(-1) O ; O I ] = [ O A_d^ru ; O A_d^rd ].   (51)

Permuting the coordinates with the exchange matrix [ O I ; I O ],

A_2 = [ O I ; I O ] A_d [ O I ; I O ] = [ A_d^rd O ; A_d^ru O ].   (52)

A_2 is the canonical observability decomposition of A: the sub-system matrix of the observable part is A_d^rd and that of the unobservable part is O, whose eigenvalues are all zero. According to Definition 2, (C, A) is completely detectable.

Now (C, A, B_q) is completely detectable and stabilizable; according to Lemma 1, the corresponding Kalman filter is asymptotically stable. End of proof.

4 Examples

To verify the validity of our algorithm, consider the following constant-velocity dynamic system with position-only measurements at three scales:

x_1(k_1 + 1) = A x_1(k_1) + B w(k_1),
z_j(2^(3-j) n + m) = C_j x_j(2^(3-j) n + m) + v_j(2^(3-j) n + m),  j = 1, 2, 3,

where

A = [ 1 T ; 0 1 ],  B = [ T^2/2 ; T ],  C_1 = C_2 = C_3 = [ 1 0 ],

T is the sampling period and x = [displacement, velocity]'. w(i) is Gaussian white noise with zero mean and variance q; the Gaussian white noises v_1(i), v_2(i) and v_3(i) have zero mean and variances r_1, r_2 and r_3, respectively, and are uncorrelated with w(i). Let T = 1, q = 1, r_1 = 6, r_2 = 5, r_3 = 5.
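A minimal sketch of the finest-scale part of this setup: simulate the constant-velocity process and run a standard scale-1 Kalman filter as the single-sensor baseline. This is an illustrative reconstruction under assumed parameter values (the full multiscale algorithm and the exact Monte Carlo setup of the paper are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# finest-scale constant-velocity model; T, q, r1 are illustrative values
T, q, r1 = 1.0, 1.0, 6.0
A = np.array([[1.0, T], [0.0, 1.0]])
B = np.array([[T ** 2 / 2], [T]])
C = np.array([[1.0, 0.0]])

# simulate the scale-1 trajectory and position-only measurements
N = 200
x = np.zeros(2)
true_pos, meas = [], []
for _ in range(N):
    x = A @ x + (B * rng.normal(0.0, np.sqrt(q))).ravel()
    true_pos.append(x[0])
    meas.append((C @ x)[0] + rng.normal(0.0, np.sqrt(r1)))

# standard Kalman filter at scale 1 (the single-sensor baseline)
xh, P = np.zeros(2), 10.0 * np.eye(2)
est_pos = []
for z in meas:
    xh = A @ xh                                  # predict
    P = A @ P @ A.T + q * (B @ B.T)
    S = (C @ P @ C.T)[0, 0] + r1                 # innovation variance
    K = (P @ C.T) / S                            # gain, shape (2, 1)
    xh = xh + K[:, 0] * (z - (C @ xh)[0])        # update
    P = (np.eye(2) - K @ C) @ P
    est_pos.append(xh[0])

true_pos = np.array(true_pos)
rmse_meas = np.sqrt(np.mean((np.array(meas) - true_pos) ** 2))
rmse_filt = np.sqrt(np.mean((np.array(est_pos) - true_pos) ** 2))
print(rmse_meas, rmse_filt)   # filtering should beat the raw measurements
```

The multiscale algorithm of this paper improves further on this baseline by fusing the coarse-scale sensors through the block model (27).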

Fig. 3 shows a sequence of the true velocity and the estimated velocity at the three scales. Fig. 5 shows a sequence of the true displacement and the estimated displacement at scales 2 and 3. Figs. 4 and 6 give the results of a Monte Carlo simulation (100 runs). Fig. 4 compares the measurement-noise root-mean-square (RMS) with the estimation-error RMS at the three scales; the noise compression ratios at scales 1, 2 and 3 are 7.8037, 4.9755 and 4.258 dB, respectively. Fig. 6 shows the RMS estimation error at scale 1 obtained by performing Kalman filtering directly and by the algorithm in this paper. For displacement, the noise compression ratio of the algorithm in this paper is 6.2739 dB higher than that of direct Kalman filtering.

Fig. 3. True velocity (dotted) and estimated velocity (solid). (a) Scale 1; (b) scale 2; (c) scale 3.

Fig. 4. RMS measurement noise (dotted) and RMS estimation error (solid). (a) Scale 1; (b) scale 2; (c) scale 3.

Fig. 5. True displacement (dotted) and estimated displacement (solid). (a) Scale 2; (b) scale 3.

Fig. 6. RMS estimation error at scale 1 obtained by direct Kalman filtering (dotted) and by the algorithm in this paper (solid). (a) Displacement; (b) velocity.

To compare the algorithm in this paper with that of Hong [14], a set of simulations was executed with a two-scale constant-velocity dynamic system; the results are shown in Table 1. It can be seen that the estimation accuracy of our algorithm is better than that of Hong's at each scale.

Table 1  Noise compression ratios of Hong's algorithm and this paper (dB)

  q    r_1     r_2     Scale 1: L. Hong   this paper   Scale 2: L. Hong   this paper
  1    6.25    3.24    2.6874             7.3087       0.4620             2.435
  2    6.25    3.24    2.0948             6.6397       0.3359             2.646
  4    6.25    3.24    1.5579             5.794        0.2250             1.8897
  4    2.25            2.2946             6.7447       0.4088             2.394
  9    6.25            2.9739             7.0237       0.6964             3.059
  6    9               3.5452             7.8525       0.78               2.909

5 Conclusion and discussion

The modeling and optimal estimation of a class of DMSs observed independently by several sensors at different scales, to which traditional Kalman filtering cannot be applied directly, were presented here. Using the Haar wavelet transform to approximate the state projection between scales, we generalized the DMS into the standard state-space model, and Kalman filtering is then employed as the LMMSE algorithm. In the time-invariant case, we proved that as long as the DMS is stochastically completely controllable and observable at the finest scale, its associated Kalman filter is asymptotically stable.

The computation increases rapidly with the number of scales J. If the dimension of the state x_1 is n_x, the dimension of the augmented state vector x is 2^(J-1) n_x, and the dimensions of A, B and the error covariance matrix P are 2^(J-1) times their original dimensions. Suppose each measurement matrix C_j is of dimension n_z x n_x; then C has n-_z = (2^J - 1) n_z rows, and a matrix inversion of dimension n-_z x n-_z occurs when calculating the gain matrix K in the Kalman filter [17], which needs very heavy computation, O(n-_z^3).

Fortunately, a fast algorithm for the DMS estimation was proposed by Zhang [18]. Observing that the measurements z_j(k_j) are independent of each other both inter-scale and intra-scale, ref. [18] employs sequential Kalman filtering of x(k): in each time block k, the estimate of x(k) is updated by the measurements z_j(k_j) one by one, so the inversion of the n-_z x n-_z matrix is replaced by inversions of small n_z x n_z matrices. Furthermore, ref. [18] pointed out that the computation can be reduced much further at the finest scale, because the scale-1 state transition equation (1) is available: sequential Kalman filtering with all the measurements at scale 1 is equivalent to fixed-interval Kalman smoothing of the sub-system at scale 1. Thus, for the 2^(J-1) measurements at scale 1 (about half of the 2^J - 1 measurements within a time block), the 2^(J-1) n_x-dimensional DMS is reduced to an n_x-dimensional dynamic system.

Acknowledgements  This work was supported by the National Natural Science Foundation of China (Grant No. 6072037).

References
1. Basseville, M., Benveniste, A., Chou, K. et al., Modeling and estimation of multiresolution stochastic processes, IEEE Trans. Information Theory, 1992, 38(2): 766-784. [DOI]
2. Chou, K., Willsky, A. S., Benveniste, A., Multiscale recursive estimation, data fusion, and regularization, IEEE Trans. Automatic Control, 1994, 39(3): 464-478. [DOI]
3. Chou, K., Willsky, A. S., Nikoukhah, R., Multiscale systems, Kalman filters, and Riccati equations, IEEE Trans. Automatic Control, 1994, 39(3): 479-492. [DOI]
4. Daoudi, K., Frakt, A., Willsky, A. S., Multiscale autoregressive models and wavelets, IEEE Trans. Information Theory, 1999, 45(3): 828-845. [DOI]
5. Luettgen, M., Karl, W., Willsky, A. S. et al., Multiscale representations of Markov random fields, IEEE Trans. Signal Processing, 1993, 41(12): 3377-3396. [DOI]

6. Frakt, A. B., Internal Multiscale Autoregressive Processes, Stochastic Realization, and Covariance Extension, PhD Thesis, Massachusetts Institute of Technology, Aug. 1999.
7. Daubechies, I., Ten Lectures on Wavelets, CBMS-NSF Series in Appl. Math., Philadelphia, PA: SIAM, 1992.
8. Mallat, S., A theory for multiresolution signal decomposition: the wavelet representation, IEEE Trans. PAMI, 1989, 11(7): 674-693.
9. Vetterli, M., Herley, C., Wavelets and filter banks: theory and design, IEEE Trans. Signal Processing, 1992, 40(9): 2207-2232. [DOI]
10. Jawerth, B., Sweldens, W., An overview of wavelet based multiresolution analyses, SIAM Review, 1994, 36(3): 377-412.
11. Strang, G., Nguyen, T., Wavelets and Filter Banks, Cambridge, MA: Wellesley-Cambridge Press, 1996.
12. Anderson, B. D. O., Moore, J. B., Optimal Filtering, Englewood Cliffs, NJ: Prentice-Hall, 1979.
13. Chen, Chi-Tsong, Linear System Theory and Design, New York: Holt, Rinehart and Winston, 1970.
14. Hong, Lang, Multiresolutional distributed filtering, IEEE Trans. Automatic Control, 1994, 39(4): 853-856. [DOI]
15. Hong, Lang, Approximating multirate estimation, IEE Proc. Vision, Image and Signal Processing, 1995, 142(2): 232-236.
16. Hong, Lang, Chen, Guanrong, Chui, C. K., A filter-bank-based Kalman filtering technique for wavelet estimation and decomposition of random signals, IEEE Trans. Circuits and Systems II: Analog and Digital Signal Processing, 1998, 45(2): 237-241. [DOI]
17. Mendel, J. M., Lessons in Digital Estimation Theory, Englewood Cliffs, NJ: Prentice-Hall, 1987.
18. Zhang, Lei, The Optimal Estimation of a Class of Dynamic Systems, PhD Thesis, Northwestern Polytechnical University, Xi'an, PRC, Oct. 2001.