AN ADDENDUM TO THE PROBLEM OF STOCHASTIC OBSERVABILITY

V. Dragan and T. Morozan
Institute of Mathematics of the Romanian Academy, P.O. Box 1-764, RO-77, Bucharest, Romania
vdragan@stoilow.imar.ro, tmorozan@stoilow.imar.ro

Abstract. In this paper we introduce a concept of stochastic observability for a class of linear stochastic systems subject both to multiplicative white noise and to Markovian jumping. The definition of stochastic observability adopted here extends to this framework the definition of uniform observability of a time-varying linear deterministic system. By several examples we show that the concept of stochastic observability introduced in this paper is less restrictive than those introduced in some previous papers. We also show that the stochastic observability introduced in this paper does not always imply stochastic detectability, as one might expect.

Keywords. Linear stochastic systems, Markovian jumping, multiplicative white noise, stochastic observability.

1 Introduction

Both the observability property and the controllability property of a linear system have played a crucial role in solving a wide class of control problems. Here we recall that, starting with the pioneering paper of Kalman [7], the observability and controllability properties provide sufficient conditions guaranteeing the existence of the stabilizing solution of the matrix Riccati differential equation connected with the linear quadratic problem and the filtering problem [12, 13]. In the stochastic framework, the concept of stochastic observability was introduced in order to provide conditions which guarantee the existence of the stabilizing solutions of the matrix Riccati differential equations of stochastic control [9]. For linear systems subject to Markovian jumping, some definitions of stochastic observability were introduced in [8, 6].
In this paper we introduce the concept of stochastic observability for a class of linear stochastic systems subject both to multiplicative white noise and to Markovian jumping. The definition of stochastic observability adopted here extends to this framework the definition of uniform observability of a time-varying linear deterministic system [7, 1]. By several examples we show that the concept of stochastic observability introduced in this paper is less restrictive than those introduced by [8] and [6]. We also show that the concept of stochastic observability introduced in this paper does not always imply stochastic detectability, as one might expect. Finally we show that

the stochastic observability introduced here guarantees the positivity of the observability Gramian (if it exists) and that, additionally, any positive solution of the corresponding Riccati equation is stabilizing, as happens in the deterministic case.

2 Stochastic observability

Consider the system

dx(t) = A_0(η(t))x(t)dt + Σ_{k=1}^r A_k(η(t))x(t)dw_k(t)   (2.1)
y(t) = C_0(η(t))x(t)   (2.2)

where x(t) ∈ R^n, y ∈ R^p, and w(t) = (w_1(t), ..., w_r(t))^T, t ≥ 0, is a standard Wiener process on a given probability space (Ω, F, P); η(t), t ≥ 0, is a right-continuous homogeneous Markov chain with state space D = {1, 2, ..., d} and probability transition matrix P(t) = e^{Qt}, t ≥ 0, Q = [q_{ij}], with Σ_{j=1}^d q_{ij} = 0 for i ∈ D and q_{ij} ≥ 0 for i ≠ j (see [2]). Throughout this paper we assume that {w(t)}_{t≥0} and {η(t)}_{t≥0} are independent stochastic processes and that P{η(0) = i} > 0 for all i ∈ D. For each t_0 ≥ 0 and x_0 ∈ R^n, x(t, t_0, x_0) stands for the solution of (2.1) with initial condition x(t_0, t_0, x_0) = x_0 (for the precise definition of the solution see [5, 11]). Let Φ(t, t_0) be the fundamental matrix solution of (2.1); that is, the j-th column of Φ(t, t_0) is x(t, t_0, e_j), j = 1, 2, ..., n, where e_j = (0 ... 0 1 0 ... 0)^T is the j-th vector of the canonical basis of R^n.

Now we are able to introduce the definition of stochastic observability.

Definition 2.1 We say that the system (2.1)-(2.2) is stochastically observable, or that the triple (C_0, A_0, A_1, ..., A_r; Q) is observable, if there exist β > 0 and τ > 0 such that

E[∫_t^{t+τ} Φ^T(s, t) C_0^T(η(s)) C_0(η(s)) Φ(s, t) ds | η(t) = i] ≥ β I_n   (2.3)

for all i ∈ D and all t ≥ 0, E[· | η(t) = i] being the conditional expectation with respect to the event η(t) = i. Sometimes we shall write "(C, A; Q) is observable" instead of "(C_0, A_0, A_1, ..., A_r; Q) is observable".

Remark 2.1 a) In the particular case D = {1} and A_k = 0, 1 ≤ k ≤ r, the inequality (2.3) reduces to the well-known definition of uniform observability for a time-varying linear deterministic system (see e.g. [7, 1]).

b) If D = {1}, the equations (2.1)-(2.2) become

dx(t) = A_0 x(t)dt + Σ_{k=1}^r A_k x(t)dw_k(t)   (2.4)
y(t) = C_0 x(t).   (2.5)

In this case (2.3) becomes

E[∫_t^{t+τ} Φ^T(s, t) C_0^T C_0 Φ(s, t) ds] ≥ β I_n   (2.6)

for all t ≥ 0, Φ(s, t) being now the fundamental matrix solution of (2.4).

c) In the case d ≥ 2 and A_k(i) = 0, 1 ≤ k ≤ r, i ∈ D, the system (2.1)-(2.2) reduces to

ẋ(t) = A_0(η(t))x(t)   (2.7)
y(t) = C_0(η(t))x(t),   (2.8)

and in this case, if (2.3) holds with Φ(s, t) standing for the fundamental matrix solution of (2.7), we say that the triple (C_0, A_0; Q) is observable.

3 Some auxiliary results

3.1 Liapunov type operators and exponential stability

Let S_n ⊂ R^{n×n} be the space of symmetric matrices and S_n^d = S_n ⊕ S_n ⊕ ... ⊕ S_n (d factors). Let L : S_n^d → S_n^d be defined by

(LX)(i) = A_0(i)X(i) + X(i)A_0^T(i) + Σ_{k=1}^r A_k(i)X(i)A_k^T(i) + Σ_{j=1}^d q_{ji}X(j)   (3.9)

for all i ∈ D and X = (X(1), ..., X(d)) ∈ S_n^d. The operator L will be termed the Liapunov type operator defined by the system (A_0, A_1, ..., A_r; Q). It is easy to see that its adjoint operator L* is given by

(L*X)(i) = A_0^T(i)X(i) + X(i)A_0(i) + Σ_{k=1}^r A_k^T(i)X(i)A_k(i) + Σ_{j=1}^d q_{ij}X(j).   (3.10)

Let L_0 : S_n^d → S_n^d be the linear operator obtained from (3.9) by taking A_k(i) = 0, 1 ≤ k ≤ r, i ∈ D. Consider the linear differential equation on S_n^d

(d/dt)S(t) = L S(t)   (3.11)

and let e^{Lt} denote the linear evolution operator defined on S_n^d by (3.11).
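For readers who wish to experiment numerically, the Liapunov type operator is straightforward to implement. The sketch below (numpy assumed; the function name and the (d, n, n) array layout are our choices, not the paper's) applies L to a tuple X = (X(1), ..., X(d)); the jump term uses the convention Σ_j q_{ji}X(j), under which L is adjoint to the operator L* with Σ_j q_{ij}X(j).

```python
import numpy as np

def liapunov_op(X, A0, Ak, Q):
    """Apply the Liapunov type operator L to X = (X(1), ..., X(d)).

    X  : (d, n, n) symmetric blocks
    A0 : (d, n, n) drift matrices A_0(i)
    Ak : (r, d, n, n) diffusion matrices A_k(i)
    Q  : (d, d) generator matrix of the Markov chain
    """
    LX = np.empty_like(X)
    d = X.shape[0]
    for i in range(d):
        LX[i] = A0[i] @ X[i] + X[i] @ A0[i].T
        for k in range(Ak.shape[0]):
            LX[i] += Ak[k, i] @ X[i] @ Ak[k, i].T
        for j in range(d):
            LX[i] += Q[j, i] * X[j]  # jump coupling, q_{ji} convention
    return LX
```

In the degenerate case d = 1, A_k = 0, this collapses to the familiar deterministic Liapunov expression A_0 X + X A_0^T.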

The following result was proved in [3] in a more general setting.

Proposition 3.1 a) e^{Lt}X ≥ e^{L_0 t}X ≥ 0 and e^{L*t}X ≥ e^{L_0* t}X ≥ 0 for all X = (X(1), X(2), ..., X(d)) ∈ S_n^d with X(i) ≥ 0, i ∈ D, and all t ≥ 0.
b) [e^{L*(t−s)}X](i) = E[Φ^T(t, s)X(η(t))Φ(t, s) | η(s) = i] for all t ≥ s ≥ 0, i ∈ D, and X = (X(1), X(2), ..., X(d)) ∈ S_n^d.

Definition 3.1 We say that the zero solution of the equation (2.1) is mean square exponentially stable (MSES for shortness), or that the system (A_0, A_1, ..., A_r; Q) is stable, if there exist α > 0, β > 0 such that E[|x(t)|²] ≤ βe^{−αt}|x(0)|² for all t ≥ 0 and all x(0) ∈ R^n.

The following result provides necessary and sufficient conditions for the MSES of the zero solution of (2.1).

Theorem 3.1 The following are equivalent:
(i) The solution x(t) = 0 of the equation (2.1) is MSES.
(ii) lim_{t→∞} E[|x(t)|²] = 0 for any solution x(t) of (2.1).
(iii) The eigenvalues of the operator L are located in the half plane Re λ < 0.
(iv) There exists H = (H(1), H(2), ..., H(d)) ∈ S_n^d with H(i) > 0, i ∈ D, such that the equation LX + H = 0 has a solution X = (X(1), ..., X(d)) with X(i) > 0, i ∈ D.
(v) There exists H as before such that the equation L*X + H = 0 has a solution X = (X(1), ..., X(d)) with X(i) > 0, i ∈ D.
(vi) There exists X = (X(1), ..., X(d)) with X(i) > 0 such that L*X < 0.

For a detailed proof see [3].

3.2 Stochastic detectability

Definition 3.2 We say that the system (2.1)-(2.2) is stochastically detectable, or that the triple (C, A; Q) is detectable, if there exists L = (L(1), L(2), ..., L(d)), L(i) ∈ R^{n×p}, such that the zero solution of the equation

dx(t) = [A_0(η(t)) + L(η(t))C_0(η(t))]x(t)dt + Σ_{k=1}^r A_k(η(t))x(t)dw_k(t)

is MSES. L will be termed a stabilizing injection.
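Criterion (iii) of Theorem 3.1 lends itself to a direct numerical check: in vec coordinates L becomes a dn² × dn² matrix, and MSES holds exactly when all of its eigenvalues have negative real part. A minimal sketch, with numpy assumed and the function names ours (the spectrum is unaffected by which transposition convention is used for the jump coupling):

```python
import numpy as np

def liapunov_matrix(A0, Ak, Q):
    """Matrix of the operator L in vec coordinates, size d*n^2 x d*n^2.

    Uses row-major flattening, for which vec(A X B) = (A kron B^T) vec(X)."""
    d, n, _ = A0.shape
    N = n * n
    M = np.zeros((d * N, d * N))
    for i in range(d):
        blk = np.kron(A0[i], np.eye(n)) + np.kron(np.eye(n), A0[i])
        for k in range(Ak.shape[0]):
            blk += np.kron(Ak[k, i], Ak[k, i])
        M[i*N:(i+1)*N, i*N:(i+1)*N] = blk
        for j in range(d):
            M[i*N:(i+1)*N, j*N:(j+1)*N] += Q[j, i] * np.eye(N)
    return M

def is_mses(A0, Ak, Q):
    """Theorem 3.1 (iii): MSES iff the spectrum of L lies in Re(lambda) < 0."""
    return np.linalg.eigvals(liapunov_matrix(A0, Ak, Q)).real.max() < 0
```

As a scalar sanity check, for dx = a x dt + b x dw (d = 1, r = 1) the matrix of L is the number 2a + b², so MSES holds if and only if 2a + b² < 0.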

Proposition 3.2 The following are equivalent:
(i) The triple (C, A; Q) is detectable.
(ii) The system of linear equations

A_0^T(i)X(i) + X(i)A_0(i) + C_0^T(i)Γ^T(i) + Γ(i)C_0(i) + Σ_{k=1}^r A_k^T(i)X(i)A_k(i) + Σ_{j=1}^d q_{ij}X(j) + I_n = 0   (3.12)

has a solution X = (X(1), ..., X(d)) ∈ S_n^d, Γ = (Γ(1), ..., Γ(d)) with X(i) > 0, Γ(i) ∈ R^{n×p}, i ∈ D.
(iii) The linear matrix inequality

A_0^T(i)X(i) + X(i)A_0(i) + C_0^T(i)Γ^T(i) + Γ(i)C_0(i) + Σ_{k=1}^r A_k^T(i)X(i)A_k(i) + Σ_{j=1}^d q_{ij}X(j) < 0   (3.13)

has a solution X = (X(1), X(2), ..., X(d)) ∈ S_n^d, Γ = (Γ(1), Γ(2), ..., Γ(d)) with X(i) > 0, Γ(i) ∈ R^{n×p}, i ∈ D.
Moreover, if (X, Γ), X > 0, is a solution of (3.12), then L = (L(1), L(2), ..., L(d)), L(i) = X^{-1}(i)Γ(i), provides a stabilizing injection.

4 Main results

Based on Proposition 3.1 (b) we obtain:

Proposition 4.1 The following are equivalent:
(i) The system (2.1)-(2.2) is stochastically observable.
(ii) There exists τ > 0 such that

∫_0^τ e^{L*s} C̃ ds > 0   (4.14)

where C̃ = (C̃(1), ..., C̃(d)), C̃(i) = C_0^T(i)C_0(i), i ∈ D.
(iii) There exists τ > 0 such that X_0(τ) > 0, where X_0(t) is the solution of the problem with initial value

(d/dt)X_0(t) = L*X_0(t) + C̃, X_0(0) = 0.   (4.15)

Proof. (i) ⟺ (ii) follows from Proposition 3.1 (b). Since X_0(t) = ∫_0^t e^{L*(t−s)} C̃ ds = ∫_0^t e^{L*s} C̃ ds, t ≥ 0, it follows that (iii) ⟺ (ii). The proof is complete.
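Criterion (iii) of Proposition 4.1 suggests a simple numerical test: integrate the initial value problem (4.15) and inspect the smallest eigenvalue of the blocks X_0(τ, i). The sketch below is ours (numpy assumed; the explicit Euler scheme, the default horizon and the step count are illustrative choices, not prescribed by the paper):

```python
import numpy as np

def adjoint_op(X, A0, Ak, Q):
    """Apply the adjoint Liapunov type operator L* to X = (X(1), ..., X(d))."""
    LX = np.empty_like(X)
    d = X.shape[0]
    for i in range(d):
        LX[i] = A0[i].T @ X[i] + X[i] @ A0[i]
        for k in range(Ak.shape[0]):
            LX[i] += Ak[k, i].T @ X[i] @ Ak[k, i]
        for j in range(d):
            LX[i] += Q[i, j] * X[j]
    return LX

def observability_margin(A0, Ak, C0, Q, tau=5.0, steps=20000):
    """Integrate (4.15) by explicit Euler on [0, tau] and return
    min over i of the smallest eigenvalue of X_0(tau, i); a strictly
    positive value indicates stochastic observability (Prop. 4.1 (iii))."""
    d, n, _ = A0.shape
    Ct = np.array([C0[i].T @ C0[i] for i in range(d)])  # blocks of C~
    X = np.zeros((d, n, n))
    h = tau / steps
    for _ in range(steps):
        X = X + h * (adjoint_op(X, A0, Ak, Q) + Ct)
    return min(np.linalg.eigvalsh(X[i]).min() for i in range(d))
```

For the data of Example 1 below with α = 0, q = 1 this returns a value near 2.25, while for a single-mode system whose pair (C_0, A_0) is not observable it returns 0.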

Further, from Proposition 4.1 and Proposition 3.1 (a) we obtain:

Proposition 4.2 The following hold:
(i) If for each i ∈ D the pair (C_0(i), A_0(i)) is observable (in the deterministic sense), then the triple (C_0, A_0; Q) is observable.
(ii) If (C_0, A_0; Q) is observable, then (C, A; Q) is observable.

Proposition 4.3 Let X_0(t) be the solution of the problem with initial value (4.15). If there exists τ > 0 such that X_0(τ) > 0, then X_0(t) > 0 for all t > 0.

Proof. For each t > 0 we write the representation X_0(t) = (X_0(t, 1), X_0(t, 2), ..., X_0(t, d)) = ∫_0^t e^{L*(t−s)} C̃ ds. Since e^{L*(t−s)} : S_n^d → S_n^d is a positive operator, we deduce that X_0(t) ≥ 0 for all t ≥ 0. Moreover, if t ≥ τ we have X_0(t) ≥ X_0(τ); therefore, if X_0(τ) > 0, then X_0(t) > 0 for all t ≥ τ. It remains to show that X_0(t) > 0 for 0 < t < τ. To this end we show that det X_0(t, i) > 0 for 0 < t < τ, i ∈ D. Indeed, since det X_0(t, i) = det{[∫_0^t e^{L*(t−s)} C̃ ds](i)}, the map t ↦ det X_0(t, i) is an analytic function, so the set of its zeros on [0, τ] has no accumulation point. It follows that there exists τ_1 > 0 such that det X_0(t, i) > 0 for all t ∈ (0, τ_1]. Invoking again the monotonicity of the function t ↦ X_0(t), we conclude that X_0(t) > 0 for all t ≥ τ_1, and the proof ends.

Remark 4.1 From Proposition 4.1 and Proposition 4.3 it follows that the stochastic observability of the system (2.1)-(2.2) may be checked by using a numerical procedure to verify the positivity of the solution X_0(t) over a sufficiently long interval of time.

Proposition 4.4 The triple (C, A; Q) is observable if and only if there do not exist τ > 0, i ∈ D and x_0 ≠ 0 such that E[|y(t, 0, x_0)|² | η(0) = i] = 0 for all t ∈ [0, τ], where y(t, 0, x_0) = C_0(η(t))x(t, 0, x_0), x(t, 0, x_0) being the solution of (2.1) with initial condition x(0, 0, x_0) = x_0.

The next result provides a sufficient condition assuring stochastic observability. Its proof may be found in [10].

Proposition 4.5 The triple (C, A; Q) is observable if for every i ∈ D, rank M(i) = n, where

M(i) = [C_0^T(i), A_0^T(i)C_0^T(i), ..., (A_0^T(i))^{n−1}C_0^T(i), q_{i1}C_0^T(1), ..., q_{id}C_0^T(d), A_1^T(i)C_0^T(i), ..., A_r^T(i)C_0^T(i)].
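The rank test of Proposition 4.5 is easy to automate. The sketch below (numpy assumed; the function name and argument layout are ours, and the block structure of M(i) follows our reconstruction of the matrix above):

```python
import numpy as np

def rank_condition(A0, Ak, C0, Q, i):
    """Sufficient test of Proposition 4.5: rank M(i) = n, with
    M(i) = [C_0^T(i), A_0^T(i)C_0^T(i), ..., (A_0^T(i))^{n-1}C_0^T(i),
            q_{i1}C_0^T(1), ..., q_{id}C_0^T(d),
            A_1^T(i)C_0^T(i), ..., A_r^T(i)C_0^T(i)]."""
    d, n, _ = A0.shape
    Ci = C0[i].T                       # n x p block C_0^T(i)
    cols, P = [], np.eye(n)
    for _ in range(n):                 # Kalman-type blocks (A_0^T)^m C_0^T
        cols.append(P @ Ci)
        P = A0[i].T @ P
    for j in range(d):                 # coupling blocks q_ij C_0^T(j)
        cols.append(Q[i, j] * C0[j].T)
    for k in range(Ak.shape[0]):       # noise blocks A_k^T(i) C_0^T(i)
        cols.append(Ak[k, i].T @ Ci)
    return np.linalg.matrix_rank(np.hstack(cols)) == n
```

Applied to the data of Example 1 below, the coupling blocks q_{ij}C_0^T(j) supply the direction missing from each individual pair (C_0(i), A_0(i)), so the test succeeds even though each pair alone is unobservable.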
In the following examples, the stochastic observability used in this paper is compared with other types of stochastic observability, for example those introduced in [6] and [8]. We also

show that the stochastic observability used in this paper does not always imply stochastic detectability, as one might expect.

Example 1 The case of a system subject to Markovian jumping with d = 2, n = 2, p = 1. Take

A_0(1) = A_0(2) = [ α 0 ; 0 α ], C_0(1) = [1 0], C_0(2) = [0 1], Q = [ −q q ; q −q ], α ∈ R, q > 0.

It is obvious that the pairs (C_0(1), A_0(1)) and (C_0(2), A_0(2)) are not observable in the deterministic sense. Therefore this system is not stochastically observable in the sense of [8, 6]. We shall show that it is stochastically observable in the sense of Definition 2.1. To this end we use the implication (iii) ⟹ (i) in Proposition 4.1: we show that there exists τ > 0 such that X_1(τ) > 0 and X_2(τ) > 0, where X_i(t), i = 1, 2, is the solution of the Cauchy problem

(d/dt)X_i(t) = A_0^T(i)X_i(t) + X_i(t)A_0(i) + Σ_{j=1}^2 q_{ij}X_j(t) + C_0^T(i)C_0(i), X_i(0) = 0, i = 1, 2.   (4.16)

From the representation formula (X_1(t), X_2(t)) = ∫_0^t e^{L*(t−s)} C̃ ds it follows that X_i(t) ≥ 0 for all t ≥ 0. Therefore it is sufficient to show that there exists τ > 0 such that det X_i(τ) > 0. Set

X_i(t) = [ x_i(t) y_i(t) ; y_i(t) z_i(t) ], i = 1, 2.

After some simple computations we obtain y_i(t) = 0 and

det X_i(t) = x_i(t)z_i(t) − y_i²(t) = x_i(t)z_i(t) = x(t)z(t), t ≥ 0,

where

x(t) = (1/2)∫_0^t [e^{2αs} + e^{2(α−q)s}] ds, z(t) = (1/2)∫_0^t [e^{2αs} − e^{2(α−q)s}] ds.

For details see [4]. It is easy to see that for every α ∈ R and q > 0 we have lim_{t→∞} x(t)z(t) > 0.

Remark 4.2 Let us consider the system of type (2.1)-(2.2) with n = 2, d = 2, p = 1, r = 1 and A_0(1) = A_0(2) = αI_2, C_0(1) = [1 0], C_0(2) = [0 1], A_1(i) an arbitrary 2×2 matrix, Q = [ −q q ; q −q ], α ∈ R, q > 0. Combining the conclusion of Example 1 with Proposition 4.2 (ii), it follows that the system (C, (A_0, A_1); Q) is observable.

Example 2 Stochastic observability does not always imply stochastic detectability. Let us consider the system subject to Markovian jumping with d = 2, n = 2, p = 1 and

A_0(1) = A_0(2) = (q/2)I_2, C_0(1) = [1 0], C_0(2) = [0 1], Q = [ −q q ; q −q ].   (4.17)

From the previous example we conclude that the system (C_0, A_0; Q) is observable. Invoking (i) ⟺ (ii) from Proposition 3.2, we deduce that if the system (4.17) were stochastically detectable, then there would exist matrices X(i) > 0 and Γ(i) = [ γ_1(i) ; γ_2(i) ], i = 1, 2, verifying the system of linear equations

A_0^T(i)X(i) + X(i)A_0(i) + Γ(i)C_0(i) + C_0^T(i)Γ^T(i) + Σ_{j=1}^2 q_{ij}X(j) + I_2 = 0, i = 1, 2.

For i = 1 this reduces to qX(2) + Γ(1)C_0(1) + C_0^T(1)Γ^T(1) + I_2 = 0 which, since X(2) > 0, implies

I_2 + [ 2γ_1(1) γ_2(1) ; γ_2(1) 0 ] < 0;

this is a contradiction, because the (2,2) entry of the left-hand side equals 1.

Example 3 Let us consider the stochastic system (2.1)-(2.2) with n = 2, d = 2, r = 1, p = 1, A_0(1) = A_0(2) = αI_2, C_0(1) = [1 0], C_0(2) = [0 1], A_1(1) = βI_2, A_1(2) an arbitrary 2×2 matrix, Q = [ −q q ; q −q ], α ∈ R, β ∈ R, q > 0, which satisfy 2α − q + β² = 0. From Remark 4.2 it follows that the above system is stochastically observable. As in the previous example, based on the equivalence (i) ⟺ (ii) from Proposition 3.2, one may show that it is not stochastically detectable (see [4] for details).

The following two results show that the stochastic observability defined in this paper guarantees the positivity of the observability Gramian and the fact that all positive semidefinite solutions of the Riccati equations are stabilizing solutions, as in the deterministic case. The proofs are omitted for shortness and may be found in [10].

Proposition 4.6 Assume that (C_0, A_0, ..., A_r; Q) is observable and the algebraic equation on S_n^d

L*X + C̃ = 0   (4.18)

has a solution X̃ ≥ 0. Then:
(i) The system (A_0, A_1, ..., A_r; Q) is stable.
(ii) X̃ > 0.
(iii) The equation (4.18) has a unique positive semidefinite solution.
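When the system is stable, the Gramian equation (4.18) is linear in the entries of X and can be solved by vectorization. Below is a sketch (numpy assumed; names ours). As a check we use the data of Example 1 with α = −1, q = 1, a stable case: the closed-form x(t), z(t) of Example 1 then give, in the limit t → ∞, X(1) = diag(3/8, 1/8) and X(2) = diag(1/8, 3/8), whose positivity illustrates Proposition 4.6 (ii).

```python
import numpy as np

def gramian(A0, Ak, C0, Q):
    """Solve the algebraic equation L* X + C~ = 0 of (4.18) by vectorizing
    L* into a d*n^2 x d*n^2 matrix (row-major flattening).  Requires the
    system (A_0, A_1, ..., A_r; Q) to be stable, so that L* is invertible."""
    d, n, _ = A0.shape
    N = n * n
    M = np.zeros((d * N, d * N))
    rhs = np.zeros(d * N)
    for i in range(d):
        blk = np.kron(A0[i].T, np.eye(n)) + np.kron(np.eye(n), A0[i].T)
        for k in range(Ak.shape[0]):
            blk += np.kron(Ak[k, i].T, Ak[k, i].T)
        M[i*N:(i+1)*N, i*N:(i+1)*N] = blk
        for j in range(d):
            M[i*N:(i+1)*N, j*N:(j+1)*N] += Q[i, j] * np.eye(N)
        rhs[i*N:(i+1)*N] = -(C0[i].T @ C0[i]).ravel()  # -C~(i)
    return np.linalg.solve(M, rhs).reshape(d, n, n)
```

The returned blocks are the stationary limit of the solution X_0(t) of (4.15), i.e. the observability Gramian of the system.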

Let us consider the following system of general algebraic Riccati equations:

A_0^T(i)X(i) + X(i)A_0(i) + Σ_{k=1}^r A_k^T(i)X(i)A_k(i) + Σ_{j=1}^d q_{ij}X(j) − (X(i)B_0(i) + Σ_{k=1}^r A_k^T(i)X(i)B_k(i))(D^T(i)D(i) + Σ_{k=1}^r B_k^T(i)X(i)B_k(i))^{−1}(B_0^T(i)X(i) + Σ_{k=1}^r B_k^T(i)X(i)A_k(i)) + C_0^T(i)C_0(i) = 0   (4.19)

where A_k(i), C_0(i) are as in (2.1)-(2.2) and B_k(i), D(i) are given n×m and p×m matrices, respectively. Such algebraic equations are closely related to the linear quadratic optimization problem for a linear stochastic system subject both to Markovian jumping and multiplicative white noise (see [4, 10] for details). The following result is proved in [10].

Proposition 4.7 If (C, A; Q) is observable, then any positive semidefinite solution of the system (4.19) is a stabilizing solution.

References

[1] B.D.O. Anderson, J.B. Moore, New results in linear system stability, SIAM J. Control, vol. 7, no. 3, pp. 398-413, 1969.
[2] J.L. Doob, Stochastic Processes, Wiley, New York, 1967.
[3] V. Dragan, T. Morozan, Stability and robust stabilization to linear stochastic systems described by differential equations with Markovian jumping and multiplicative white noise, Stochastic Analysis and Applications, no. 1, 2002 (to appear).
[4] V. Dragan, T. Morozan, The linear quadratic optimization problem and tracking problem for a class of linear stochastic systems with multiplicative white noise and Markovian jumping, Preprint no. 3/2001, Institute of Mathematics of the Romanian Academy.
[5] A. Friedman, Stochastic Differential Equations and Applications, vol. I, Academic Press, 1975.
[6] Y. Ji, H.J. Chizeck, Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control, IEEE Trans. Automat. Control, 35, no. 7, pp. 777-788, 1990.
[7] R. Kalman, Contributions to the theory of optimal control, Boletín de la Sociedad Matemática Mexicana, Segunda Serie, 5, no. 1, pp. 102-119, 1960.

[8] T. Morozan, Optimal stationary control for dynamic systems with Markov perturbations, Stochastic Analysis and Applications, 1, pp. 299-325, 1983.
[9] T. Morozan, Stochastic uniform observability and Riccati equations of stochastic control, Revue Roumaine de Mathématiques Pures et Appliquées, 9, 1993.
[10] T. Morozan, Linear quadratic control and tracking problems for time-varying stochastic differential systems perturbed by a Markov chain, Revue Roumaine de Mathématiques Pures et Appliquées, to appear.
[11] B. Oksendal, Stochastic Differential Equations, Fifth Edition, Springer, 2000.
[12] W.M. Wonham, On a matrix Riccati equation of stochastic control, SIAM J. Control, 6, pp. 312-326, 1968.
[13] W.M. Wonham, Random differential equations in control theory, in Probabilistic Methods in Applied Mathematics, vol. 2 (A.T. Bharucha-Reid, Ed.), Academic Press, New York, pp. 131-212, 1970.