Stochastic stability analysis for fault tolerant control systems with multiple failure processes

International Journal of Systems Science

M. Mahmoud, J. Jiang and Y. M. Zhang
Department of Electrical and Computer Engineering, University of Western Ontario, London, Ontario, Canada

A new dynamical model is developed here to study the stochastic stability of Fault Tolerant Control Systems (FTCS) with multiple failure processes. In particular, two independent failure processes with Markovian characteristics are considered: one for plant component failures and the other for actuator failures. In this model, the Fault Detection and Isolation (FDI) process is also formulated as a Markov process, with transition probabilities conditioned on the current states of the two failure processes. It is shown that exponential stability in the mean square is sufficient for almost sure asymptotic stability. In addition, a necessary and sufficient condition for exponential stability in the mean square of FTCS is derived. Some previously known results are shown to be special cases of this new result. Failures in actuators with internal dynamics are also considered. A numerical example is included to demonstrate the theoretical analysis.

1. Introduction

Safety-critical systems such as aircraft, space vehicles and nuclear power plants rely on Fault Tolerant Control Systems (FTCS) to improve reliability, maintainability and survivability. The performance of these systems should be maintained not only during normal operation, but also in the case of malfunctions in sensors, actuators and plant components. FTCS can be classified into two categories: active and passive. Active FTCS compensate for the effects of failures either by selecting a new precomputed control law or by synthesizing a new control law on-line. Both approaches need a Fault Detection and Isolation (FDI) scheme to identify the fault-induced changes and to reconfigure the control law. Thus, the FDI and the control law redesign have to work jointly. The dynamic behaviour of active FTCS is governed by stochastic differential equations, because the failures and the FDI decisions are non-deterministic in nature (Willsky 1976). An active FTCS can be modelled as a general hybrid system, as it combines the Euclidean space for the system dynamics with a discrete space for the fault-induced changes.

Hybrid systems were first studied by Kats and Krasovskii (1960), who considered the stability of moments using the stochastic Lyapunov function approach. Bucy (1965) and Kushner (1967) studied sample path stability employing the supermartingale property of some stochastic Lyapunov functions. In the literature mentioned above, a hybrid system is modelled as a linear differential equation whose coefficients vary randomly with Markovian characteristics. One class of hybrid systems is Jump Linear Systems (JLS). In JLS, the random jump process of the coefficients is represented by a finite state Markov chain called the 'plant regime mode'. The research on JLS covers two broad areas. The first deals with deriving necessary and/or sufficient conditions for the existence of the optimal quadratic regulator (Sworder 1969, Wonham 1971, Hopkins 1987, Boukas 1993). The second deals with the properties, such as stability, controllability and observability, of this class of systems (Ji and Chizeck 1990, Ji et al. 1991, Feng et al. 1992). It is important to mention that the models of JLS assume perfect knowledge of the regime.
However, this is not the case in FTCS, because one cannot generally assume perfect regime measurement in the presence of failures. Therefore, it is important to consider the impact of the FDI process on the stochastic stability of the closed-loop system.

This issue was emphasized by Mariton (1989), where sufficient conditions for the stability of hybrid systems with detection delays were derived. However, these results were based on the assumption of perfect regime knowledge. To relax this assumption and move one step closer to practical FTCS, another class of hybrid systems was defined by Srichander and Walker (1993). This class of systems is known as Fault Tolerant Control Systems with Markovian Parameters (FTCSMP). In FTCSMP, separate random processes with different state spaces are defined: the first process represents system component failures and the second represents the decisions of the FDI process used to reconfigure the control law. This model allows not only for the study of detection delays, but also for the examination of errors in detection. Mahmoud et al. (2000) carried out a robustness study of FTCSMP against parameter uncertainties. Unfortunately, the model proposed in Srichander and Walker (1993) considered failures only in actuators, and the actuators are assumed to be without any internal dynamics.

In this work, two failure processes are used: one for plant components and the other for actuators. The situation with sensor failures has been dealt with elsewhere. The main reason for using two independent failure processes is that it allows the modelling of failures at different locations with independent failure characteristics. Furthermore, it permits the construction of conditional transition probabilities of the FDI process when there are delays or errors in detection with respect to each failure process individually. Under some special conditions the two failure processes may have a common state space; in this case they may be replaced by one equivalent failure process, as will be shown.

One of the main goals of the current research is to relax the existing limitations and assumptions. Specifically, we will develop a dynamical model that permits us to consider multiple failures at different locations in the system to be controlled, namely in plant components and in actuators. Since most actuators in practical systems have their own dynamics, the results will be extended to actuators with internal dynamics. A stochastic FDI process with Markovian characteristics is defined, whose transition probabilities are conditioned on the current states of the two failure processes. In particular, a necessary and sufficient condition for stochastic exponential stability in the mean square under these conditions is derived. These results are obtained without the restrictive assumptions of instantaneous failure detection, certainty of correct isolation, limitation to failures in actuators without internal dynamics, or failures only in actuators. The paper therefore makes significant contributions to the analysis of FTCS, with some consideration of the practical aspects of applications.

This paper is organized as follows. Section 2 describes the dynamical model, the failure processes and the FDI process. A brief summary of basic terms, results and definitions for stochastic systems is given in Section 3. Stochastic stability of FTCSMP is defined in Section 4. Section 5 derives a necessary and sufficient condition for stochastic exponential stability in the mean square. A numerical example is given in Section 6 to validate the theoretical results. Finally, a concluding summary is given in Section 7.

2. Mathematical formulation of FTCSMP

2.1. Dynamical model

An FTCS subject to failures in plant components and actuators is shown in figure 1.
The system under normal operation can be described by

$$\dot{x}(t) = A(t)x(t) + B(t)u(t), \qquad (1)$$

where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $x(t) \in \mathbb{R}^{n}$ is the system state, and $u(t) \in \mathbb{R}^{m}$ is the input. It is important to emphasize that the location of a fault and the nature of the faulty component are important when determining the appropriate dynamical model to describe the faulty system. In this paper, the random changes in plant components are represented by a homogeneous Markov process $\xi(t)$ with finite state space $Z = \{1, 2, \ldots, z\}$. Similarly, the random changes that occur in the control actuators are represented by another homogeneous Markov process $\eta(t)$ with finite state space $S = \{1, 2, \ldots, s\}$. These two failure processes are not directly measurable, but can be detected by an FDI process $\psi(t)$. For a single sample FDI test acting on signals with additive white noise, $\psi(t)$ has Markovian characteristics (Srichander and Walker 1993). The state space of the FDI process is denoted by $R = \{1, 2, \ldots, r\}$.

Figure 1. Schematic diagram of the FTCSMP subject to failures in plant components and actuators (reference input, controller, actuators, plant, sensors, and the FDI block driving the controller reconfiguration).

The failure processes $\xi(t)$ and $\eta(t)$ and the FDI process $\psi(t)$ are defined in Section 2.2. If the plant components undergo a change due to a component failure, the system matrix $A$ will change accordingly. If the actuators are also subject to random variations due to malfunctions, then the input matrix $B$ will change as well. If any of the control actuators fails completely (total failure), the corresponding column in the input matrix becomes a zero vector and no actuating signal is fed to the system from that particular actuator. For that reason, the system to be controlled must possess a sufficient degree of actuator redundancy (Zhao and Jiang 1998).

The FDI process has to consider the combinations of changes in $A$ and $B$. For example, the plant components may undergo one failure, resulting in two system matrices that describe the model: $A_1$ for normal operation and $A_2$ for faulty operation. The corresponding failure process then has two states, $Z = \{1, 2\}$. Similarly, if the failures in the actuators result in three forms for the input matrix, $B_1$ (normal), $B_2$ and $B_3$ (faulty matrices), the corresponding failure process has three states, $S = \{1, 2, 3\}$. In this case, the FDI process has six states to identify the different failures encountered in the system, i.e. $R = \{1, 2, 3, 4, 5, 6\}$.

From figure 1, the control law for an active FTCS is only a function of the measurable FDI process $\psi(t)$. Therefore, the FTCSMP can be modelled as

$$\dot{x}(t) = A(\xi(t))x(t) + B(\eta(t))u(x(t), \psi(t), t), \qquad u(x(t), \psi(t), t) = K(\psi(t))x(t), \qquad (2)$$

where $x(t) \in \mathbb{R}^{n}$ is the system state, $u(x(t), \psi(t), t) \in \mathbb{R}^{m}$ is the input and $K(\psi(t))$ is a constant gain matrix that depends on the FDI process. $A(\xi(t))$ and $B(\eta(t))$ are properly dimensioned matrices and are random in nature with Markovian transition characteristics. $\xi(t)$, $\eta(t)$ and $\psi(t)$ are separable and measurable Markov processes (Doob 1953) with finite state spaces $Z = \{1, 2, \ldots, z\}$, $S = \{1, 2, \ldots, s\}$ and $R = \{1, 2, \ldots, r\}$, respectively. In the sequel, $A(\xi(t)) = A_j$ when $\xi(t) = j \in Z$, $B(\eta(t)) = B_k$ when $\eta(t) = k \in S$ and $u(x(t), \psi(t), t) = u_i$ when $\psi(t) = i \in R$. Also denote $x(t) = x$, $\xi(t) = \xi$, $\eta(t) = \eta$, $\psi(t) = \psi$, and let the initial conditions be $x(t_0) = x_0$, $\xi(t_0) = \xi_0$, $\eta(t_0) = \eta_0$, $\psi(t_0) = \psi_0$.

2.2. FDI and failure processes

Recall that the random processes $\xi(t)$, $\eta(t)$ and $\psi(t)$ are assumed to be homogeneous Markov processes with finite state spaces $Z$, $S$ and $R$, respectively. The transition probabilities for the plant component failure process $\xi(t)$ are defined as

$$p_{jh}(\Delta t) = \alpha_{jh}\,\Delta t + o(\Delta t), \quad j \neq h, \qquad p_{jj}(\Delta t) = 1 - \sum_{h \neq j} \alpha_{jh}\,\Delta t + o(\Delta t), \qquad (3)$$

while the transition probabilities for the actuator failure process $\eta(t)$ are

$$p_{kl}(\Delta t) = \beta_{kl}\,\Delta t + o(\Delta t), \quad k \neq l, \qquad p_{kk}(\Delta t) = 1 - \sum_{l \neq k} \beta_{kl}\,\Delta t + o(\Delta t), \qquad (4)$$

where $\beta_{kl}$ represents the actuator failure rate and $\alpha_{jh}$ is the plant component failure rate. Given that $\xi = j$ and $\eta = k$, the conditional transition probabilities of the FDI process $\psi(t)$ are

$$p^{jk}_{iv}(\Delta t) = q^{jk}_{iv}\,\Delta t + o(\Delta t), \quad i \neq v, \qquad p^{jk}_{ii}(\Delta t) = 1 - \sum_{v \neq i} q^{jk}_{iv}\,\Delta t + o(\Delta t). \qquad (5)$$

Depending on the indices $j$, $k$, $i$ and $v$, different interpretations can be given to $q^{jk}_{iv}$, such as the rate of false alarms, of correct detection and isolation, etc. It is important to mention that these rates are determined by the nature of the FDI process, and they are vital in deciding the stochastic stability of the closed-loop system (Srichander and Walker 1993, Mahmoud et al. 1999, 2000). In other words, the stochastic stability of the FTCSMP depends on the performance of the FDI process through $q^{jk}_{iv}$.
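
To make the role of the failure and FDI rates in (3)-(5) concrete, the following Python sketch simulates one sample path of the two failure processes and of an FDI process whose jump rates switch with the current failure-mode pair. It is an illustration only: the state-space sizes, the rate values and the simple first-order sampling scheme are assumptions of this sketch, not quantities taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical failure rates (illustration only): alpha[j, h] for the plant
    # process with state space Z = {0, 1}, beta[k, l] for the actuator process
    # with S = {0, 1}; only off-diagonal entries are used as jump rates.
    alpha = np.array([[0.0, 0.5],
                      [0.0, 0.0]])           # plant: normal -> failed at rate 0.5
    beta  = np.array([[0.0, 0.2],
                      [0.0, 0.0]])           # actuator: normal -> failed at rate 0.2

    # Hypothetical conditional FDI rates q[(j, k)][i, v]: jump rates of the FDI
    # state i -> v given the current failure-mode pair (j, k); R = {0, 1, 2, 3}.
    q = {(j, k): 0.1 * np.ones((4, 4)) for j in range(2) for k in range(2)}
    for (j, k), M in q.items():
        M[:, 2 * j + k] = 2.0                # bias jumps toward the matching FDI state

    def step(state, rates, dt):
        """One small-dt step of a continuous-time Markov chain: leave `state`
        for destination d with probability rates[d]*dt (first-order approximation)."""
        u = rng.random()
        cum = 0.0
        for dest, rate in enumerate(rates):
            if dest == state:
                continue
            cum += rate * dt
            if u < cum:
                return dest
        return state

    dt, T = 0.01, 10.0
    xi, eta, psi = 0, 0, 0                   # initial plant, actuator and FDI states
    for _ in range(int(T / dt)):
        xi  = step(xi, alpha[xi], dt)                  # cf. equation (3)
        eta = step(eta, beta[eta], dt)                 # cf. equation (4)
        psi = step(psi, q[(xi, eta)][psi], dt)         # rates conditioned on (xi, eta), cf. (5)

    print("final states: xi =", xi, ", eta =", eta, ", psi =", psi)

The point of the sketch is only that the FDI chain uses a different rate matrix depending on which failure-mode pair is currently active, which is exactly the conditioning expressed by (5).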
3. Basic definitions

In this section, some concepts of stochastic stability are briefly stated without proof. For more details, see Bucy (1965), Kushner (1967), Khasminskii (1980) and Doob (1953). Under the assumption that the system (2) satisfies a global Lipschitz condition, the solution $x(t)$ determines a family of unique continuous stochastic processes, one for each choice of the random variable $x(t_0)$. The joint process $\{x, \xi, \eta, \psi\} = \{x(t), \xi(t), \eta(t), \psi(t),\ t \in I\}$ is a Markov process.

3.1. Stochastic Lyapunov function

A very important tool in the stability analysis of stochastic systems is the stochastic Lyapunov function. It is used to characterize the stability property without explicitly solving the differential equations. Kushner (1967) stated the conditions that a stochastic function must meet to qualify as a stochastic Lyapunov function.

Definition 1: The random function $V(x, \xi, \eta, \psi, t)$ of the joint Markov process $\{x, \xi, \eta, \psi\}$ qualifies as a stochastic Lyapunov function candidate if the following conditions hold for some fixed $\varepsilon$, $0 < \varepsilon < \infty$.

(a) The function $V(x, \xi, \eta, \psi, t)$ is positive definite and continuous in $x$ in the open set $O_{\varepsilon} = \{x(t) : V(x, j, k, i, t) < \varepsilon\}$ for all $j \in Z$, $k \in S$, $i \in R$ and $t \geq 0$, and $V(x, \xi, \eta, \psi, t) = 0$ only if $x = 0$.

(b) The joint Markov process $\{x, \xi, \eta, \psi\}$ is defined until at least $\tau_{\varepsilon} = \inf\{t : x(t) \notin O_{\varepsilon}\}$ with probability one; if $x(t) \in O_{\varepsilon}$ for all $t$, then $\tau_{\varepsilon} = \infty$.

(c) The function $V(x, \xi, \eta, \psi, t)$ is in the domain of the weak infinitesimal operator of the joint Markov process $\{x(\tau_t), \xi(\tau_t), \eta(\tau_t), \psi(\tau_t), \tau_t\}$, where $\tau_t = \min(t, \tau_{\varepsilon})$.

3.2. Weak infinitesimal operator

The weak infinitesimal operator can be considered as the derivative of the function $V(x, \xi, \eta, \psi, t)$ along the trajectory of the joint Markov process $\{x, \xi, \eta, \psi, t\}$ which emerges from the point $\{x, \xi = j, \eta = k, \psi = i\}$ at time $t$. It is defined as follows.

Definition 2: Let the joint Markov process $\{x(t), \xi(t), \eta(t), \psi(t)\}$ be denoted by $\Gamma(t)$. Then the continuous function $V(x(t), \xi(t), \eta(t), \psi(t), t)$, represented as $V(\Gamma(t), t)$, with a bounded time derivative $V_t(\Gamma(t), t)$ for every $\Gamma(t)$, is said to be in the domain of the weak infinitesimal operator $\mathcal{L}$ if the limit

$$\lim_{\tau \to 0} \frac{E\{V(\Gamma(t+\tau), t+\tau) \mid \Gamma(t), t\} - V(\Gamma(t), t)}{\tau} = V_t(\Gamma(t), t) + h(\Gamma(t), t) = \mathcal{L}V(\Gamma(t), t)$$

exists pointwise and satisfies

$$\lim_{\tau \to 0} E\{V_t(\Gamma(t+\tau), t+\tau) + h(\Gamma(t+\tau), t+\tau)\} = V_t(\Gamma(t), t) + h(\Gamma(t), t),$$

where $E\{V(\Gamma(t+\tau), t+\tau) \mid \Gamma(t), t\}$ is the conditional expectation of the stochastic Lyapunov function at time $t+\tau$ given its value at time $t$, and $h(\Gamma(t), t)$ is the weak infinitesimal operator of the function $V(\Gamma(t), t) = V(x(t), \xi(t), \eta(t), \psi(t), t)$ when $t$ is fixed.

4. Stochastic stability

There are several definitions of stochastic stability in the literature. They are extensions of deterministic stability in the three modes of convergence: convergence in probability, convergence in the mean, and almost sure convergence (Kozin 1969). Despite the different definitions, it is the almost sure convergence that is of prime interest when considering practical systems (Loparo and Feng 1996). In the context of fault-tolerant control, it is important to consider almost sure asymptotic stability and stochastic exponential stability in the mean square. This section defines both forms of stochastic stability and states the theorems that guarantee them for the FTCSMP in (2).

Definition 3: The solution $x(t) = 0$ of the system (2) is said to be almost surely asymptotically stable if for any $\xi_0 \in Z$, $\eta_0 \in S$, $\psi_0 \in R$, $\varepsilon > 0$ and $\rho > 0$, there exists $\delta(\varepsilon, \rho, t_0) > 0$ such that whenever $\|x_0\| = \|x(\xi_0, \eta_0, \psi_0, t_0)\| < \delta$,

$$P\Big\{\sup_{t_0 \leq t < \infty} \|x(t; x_0, t_0)\| > \varepsilon\Big\} \leq \rho$$

and

$$P\Big\{\lim_{t \to \infty} \sup \|x(t; x_0, t_0)\| = 0\Big\} = 1.$$

Definition 4: The solution $x(t) = 0$ of the system (2) is said to be exponentially stable in the mean square if, for any $\xi_0 \in Z$, $\eta_0 \in S$, $\psi_0 \in R$ and some $\delta > 0$, there exist two constants $a > 0$ and $b > 0$ such that when $\|x_0\| = \|x(\xi_0, \eta_0, \psi_0, t_0)\| \leq \delta$, the following inequality holds for all $t \geq t_0$:

$$E\{\|x(t; x_0, t_0)\|^2\} \leq b\,\|x_0\|^2 \exp\{-a(t - t_0)\}.$$
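
As a rough empirical counterpart to Definition 4, the sketch below simulates a small closed-loop FTCSMP many times and checks whether the sample average of $\|x(t)\|^2$ decays. Everything in it is an assumption made for illustration: a hypothetical scalar plant, hypothetical failure rates, an FDI process taken to be perfect, and a simple Euler discretization; it is not the paper's example and does not replace the analytical test developed below.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical scalar FTCSMP (illustration only, not the paper's example):
    # two plant modes and two actuator modes; the FDI process is assumed perfect,
    # i.e. psi always reflects the true failure-mode pair.
    A = {0: 1.0, 1: 0.5}                     # plant "matrices" A_j (scalars here)
    B = {0: 1.0, 1: 0.5}                     # actuator "matrices" B_k (mode 1 = partial loss)
    K = {i: -4.0 for i in range(4)}          # one gain K_i per FDI state i = 2*j + k
    a_rate, b_rate = 0.2, 0.1                # hypothetical plant / actuator failure rates

    def sample_path(T=10.0, dt=1e-3, x0=1.0):
        """Euler simulation of one closed-loop sample path with random mode jumps."""
        x, xi, eta = x0, 0, 0
        xs = [x]
        for _ in range(int(T / dt)):
            if xi == 0 and rng.random() < a_rate * dt:    # plant component failure
                xi = 1
            if eta == 0 and rng.random() < b_rate * dt:   # actuator failure
                eta = 1
            psi = 2 * xi + eta                            # perfect-FDI assumption
            u = K[psi] * x
            x = x + (A[xi] * x + B[eta] * u) * dt
            xs.append(x)
        return np.array(xs)

    # Monte Carlo estimate of E{x(t)^2}; Definition 4 asks that this quantity
    # decay at an exponential rate for all admissible initial mode combinations.
    paths = np.stack([sample_path() for _ in range(200)])
    msq = (paths ** 2).mean(axis=0)
    print("E{x^2} at t = 0, 5, 10 (approx.):", msq[0], msq[len(msq) // 2], msq[-1])

Such a simulation can only suggest mean-square decay for one set of gains and rates; the theorems that follow give conditions that can be checked exactly.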
For a finite state Markov FDI process, the following theorems of stochastic stability are applicable to the dynamical system (2). These theorems were originally derived and proved by Kats and Krasovskii (1960) for a stochastic Lyapunov function $V(x(t), \zeta(t), t)$, where $\zeta(t)$ is the Markov jump process. An extension of the proofs was carried out by Srichander and Walker (1993) for the Lyapunov function $V(x(t), r(t), \eta(t), t)$. Using similar arguments, the theorems can be proved for the stochastic Lyapunov function $V(x(t), \xi(t), \eta(t), \psi(t), t)$. The proofs are not shown here to avoid repetition; the interested reader may refer to the mentioned references.

Theorem 1: Assume that $V(x(t), \xi(t), \eta(t), \psi(t), t)$ is a stochastic Lyapunov function and that the weak infinitesimal operator satisfies $\mathcal{L}V(x(t), \xi(t), \eta(t), \psi(t), t) \leq -N(x(t), \xi(t), \eta(t), \psi(t), t) \leq 0$ in the open set $O_{\varepsilon}$ for the system (2) when $\xi(t) \in Z$, $\eta(t) \in S$ and $\psi(t) \in R$, where $N(x(t), \xi(t), \eta(t), \psi(t), t)$ is continuous in $(x(t), t)$ and $N(x(t), \xi(t), \eta(t), \psi(t), t) = 0$ only if $x(t) = 0$. Then the solution $x(t) = 0$ of the system (2) is almost surely asymptotically stable.

Theorem 2: The solution $x(t) = 0$ of the system (2) is exponentially stable in the mean square if and only if there exists a Lyapunov function $V(x(t), \xi(t), \eta(t), \psi(t), t)$ that satisfies, for some constants $0 < c_1 < c_2$ and $c_3 > 0$,

(a) $c_1 \|x(t)\|^2 \leq V(x(t), \xi(t), \eta(t), \psi(t), t) \leq c_2 \|x(t)\|^2$,

(b) $\mathcal{L}V(x(t), \xi(t), \eta(t), \psi(t), t) \leq -c_3 \|x(t)\|^2$.

The following theorem is used to establish the necessary part of the condition for exponential stability in the mean square of the system (2). Moreover, in this work it is also used to obtain a sufficient condition for almost sure asymptotic stability.

Theorem 3: If the system (2) is exponentially stable in the mean square, then for any given positive definite function $W(x(t), \xi(t), \eta(t), \psi(t), t)$ which is bounded and continuous for all $t \geq t_0$, $\xi(t) \in Z$, $\eta(t) \in S$ and $\psi(t) \in R$, there exists a positive definite function $V(x(t), \xi(t), \eta(t), \psi(t), t)$ of the same order which satisfies the conditions of Theorem 2 and such that $\mathcal{L}V(x(t), \xi(t), \eta(t), \psi(t), t) = -W(x(t), \xi(t), \eta(t), \psi(t), t)$.

This positive definite function $V(x(t), \xi(t), \eta(t), \psi(t), t)$ actually satisfies the conditions of both Theorem 1 and Theorem 2. Therefore, a very important conclusion is that exponential stability in the mean square implies almost sure asymptotic stability. That is, a sufficient (but not necessary) condition for almost sure stability of the equilibrium solution of the system (2) is established. In other words, only one set of conditions needs to be satisfied to guarantee both types of stochastic stability.

5. A necessary and sufficient condition for exponential stability

In this section, a necessary and sufficient condition for the exponential stability of the FTCSMP (2) under the state feedback $u_i = K_i x$, $i \in R$, is derived. The stability must be maintained not only under normal operation, but also when there are failures in the plant components, in the actuators, or in any combination thereof. Let $V(x(t), \xi(t), \eta(t), \psi(t), t)$ be the stochastic Lyapunov function of the joint Markov process $\{x(t), \xi(t), \eta(t), \psi(t)\}$. From Definition 2, the weak infinitesimal operator for the system (2) at the point $\{x, \xi = j, \eta = k, \psi = i, t\}$ is given by

$$\mathcal{L}V(x, \xi, \eta, \psi, t) = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x} f(x, j, k, i, t) + \sum_{h \in Z} \alpha_{jh}\big[V(x, h, k, i, t) - V(x, j, k, i, t)\big] + \sum_{l \in S} \beta_{kl}\big[V(x, j, l, i, t) - V(x, j, k, i, t)\big] + \sum_{v \in R} q^{jk}_{iv}\big[V(x, j, k, v, t) - V(x, j, k, i, t)\big]. \qquad (6)$$

The results of Theorems 1-3 are also applicable to the FTCSMP (2). However, their conditions are difficult to test and to verify. We will therefore state and derive a testable necessary and sufficient condition for exponential stability in the mean square of the FTCSMP, using the weak infinitesimal operator defined in (6).

Theorem 4: A necessary and sufficient condition for exponential stability in the mean square of the FTCSMP (2) under the control law $u_i = K_i x$, $i \in R$, is that there exist steady-state solutions $P_{jki}(t) > 0$, $j \in Z$, $k \in S$, $i \in R$, as $t \to -\infty$, of the following coupled matrix differential equations:

$$\dot{P}_{jki}(t) + \tilde{A}^{T}_{jki} P_{jki}(t) + P_{jki}(t) \tilde{A}_{jki} + \sum_{h \neq j} \alpha_{jh} P_{hki}(t) + \sum_{l \neq k} \beta_{kl} P_{jli}(t) + \sum_{v \neq i} q^{jk}_{iv} P_{jkv}(t) + Q_{jki} = 0, \qquad (7)$$

where $P_{jki}(0) = 0$ and $Q_{jki} > 0$, with $\tilde{A}_{jki}$ given by

$$\tilde{A}_{jki} = A_j + B_k K_i - \tfrac{1}{2}\Big(\sum_{h \neq j} \alpha_{jh}\Big) I - \tfrac{1}{2}\Big(\sum_{l \neq k} \beta_{kl}\Big) I - \tfrac{1}{2}\Big(\sum_{v \neq i} q^{jk}_{iv}\Big) I, \qquad (8)$$

and $I$ is the identity matrix.

Proof of necessity: Assume that the dynamic system (2) is exponentially stable in the mean square under the control law $u_i = K_i x$, $i \in R$. By Theorem 3, there exists a quadratic positive definite function $V(x(t), \xi(t), \eta(t), \psi(t), t)$ such that $\mathcal{L}V(x(t), \xi(t), \eta(t), \psi(t), t) = -W(x(t), \xi(t), \eta(t), \psi(t), t) < 0$.
Consider the following quadratic stochastic Lyapunov function for the FTCSMP (2):

$$V(x(t), \xi(t), \eta(t), \psi(t), t) = x^{T}(t)\, P(\xi(t), \eta(t), \psi(t))\, x(t). \qquad (9)$$

The weak infinitesimal operator in (6) can then be written as

$$\mathcal{L}V(x, \xi, \eta, \psi, t) = x^{T} \dot{P}_{jki}(t) x + x^{T} P_{jki}(t) A_j x + x^{T} P_{jki}(t) B_k u_i + x^{T} A_j^{T} P_{jki}(t) x + u_i^{T} B_k^{T} P_{jki}(t) x + x^{T} \Big\{\sum_{h \neq j} \alpha_{jh} \big[P_{hki}(t) - P_{jki}(t)\big]\Big\} x + x^{T} \Big\{\sum_{l \neq k} \beta_{kl} \big[P_{jli}(t) - P_{jki}(t)\big]\Big\} x + x^{T} \Big\{\sum_{v \neq i} q^{jk}_{iv} \big[P_{jkv}(t) - P_{jki}(t)\big]\Big\} x. \qquad (10)$$

With the state feedback $u_i = K_i x$, the weak infinitesimal operator becomes

$$\mathcal{L}V(x, \xi, \eta, \psi, t) = x^{T} \dot{P}_{jki}(t) x + x^{T} P_{jki}(t) A_j x + x^{T} P_{jki}(t) B_k K_i x + x^{T} A_j^{T} P_{jki}(t) x + x^{T} K_i^{T} B_k^{T} P_{jki}(t) x + x^{T} \Big\{\sum_{h \neq j} \alpha_{jh} \big[P_{hki}(t) - P_{jki}(t)\big]\Big\} x + x^{T} \Big\{\sum_{l \neq k} \beta_{kl} \big[P_{jli}(t) - P_{jki}(t)\big]\Big\} x + x^{T} \Big\{\sum_{v \neq i} q^{jk}_{iv} \big[P_{jkv}(t) - P_{jki}(t)\big]\Big\} x. \qquad (11)$$

With $\tilde{A}_{jki}$ defined as in (8), rearranging terms gives

$$\mathcal{L}V(x, \xi, \eta, \psi, t) = x^{T} \Big[\dot{P}_{jki}(t) + \tilde{A}^{T}_{jki} P_{jki}(t) + P_{jki}(t) \tilde{A}_{jki} + \sum_{h \neq j} \alpha_{jh} P_{hki}(t) + \sum_{l \neq k} \beta_{kl} P_{jli}(t) + \sum_{v \neq i} q^{jk}_{iv} P_{jkv}(t)\Big] x. \qquad (12)$$

Let

$$W(x(t), \xi(t), \eta(t), \psi(t), t) = x^{T}(t)\, Q(\xi(t), \eta(t), \psi(t))\, x(t) > 0. \qquad (13)$$

Setting $\mathcal{L}V(x(t), \xi(t), \eta(t), \psi(t), t) = -W(x(t), \xi(t), \eta(t), \psi(t), t)$, we obtain

$$x^{T} \Big[\dot{P}_{jki}(t) + \tilde{A}^{T}_{jki} P_{jki}(t) + P_{jki}(t) \tilde{A}_{jki} + \sum_{h \neq j} \alpha_{jh} P_{hki}(t) + \sum_{l \neq k} \beta_{kl} P_{jli}(t) + \sum_{v \neq i} q^{jk}_{iv} P_{jkv}(t) + Q_{jki}\Big] x = 0. \qquad (14)$$

Let $\Phi_{jki}(t, \tau) = \exp\big(\tilde{A}_{jki}(t - \tau)\big)$ be the fundamental matrix associated with $\tilde{A}_{jki}$. Then the solutions of the coupled differential equations under the boundary condition $P_{jki}(0) = 0$ are

$$P_{jki}(t) = \int_{t}^{0} \Phi^{T}_{jki}(\tau, t) \Big[\sum_{h \neq j} \alpha_{jh} P_{hki}(\tau) + \sum_{l \neq k} \beta_{kl} P_{jli}(\tau) + \sum_{v \neq i} q^{jk}_{iv} P_{jkv}(\tau) + Q_{jki}\Big] \Phi_{jki}(\tau, t)\, d\tau. \qquad (15)$$

Similar coupled ordinary differential equations have been studied in detail by Wonham (1971). For the nonsingular matrices $\Phi_{jki}(t, \tau)$ and positive definite matrices $Q_{jki}$, the solutions are unique, continuous on $t \in (-\infty, 0]$, and $P_{jki}(t) > 0$ for all $j \in Z$, $k \in S$ and $i \in R$. These solutions are monotonically increasing on $(-\infty, 0]$ as $t \to -\infty$; they are bounded and will converge to steady-state solutions. This establishes necessity.

Proof of sufficiency: Assume that steady-state solutions $\{P_{jki}(t) > 0,\ j \in Z,\ k \in S,\ i \in R\}$ of the coupled matrix differential equations under the boundary conditions $P_{jki}(0) = 0$ exist. Then $V(x(t), \xi(t), \eta(t), \psi(t), t) = x^{T}(t) P(\xi(t), \eta(t), \psi(t)) x(t)$ is a stochastic Lyapunov function satisfying conditions (a)-(c) of Definition 1 and condition (a) of Theorem 2; that is, $V(x(t), \xi(t), \eta(t), \psi(t), t)$ is positive definite, bounded, continuous and in the domain of the weak infinitesimal operator. Furthermore, the steady-state solutions $\{P_{jki}(t) > 0,\ j \in Z,\ k \in S,\ i \in R\}$ satisfy the coupled matrix differential equations of Theorem 4, that is,

$$\dot{P}_{jki}(t) + \tilde{A}^{T}_{jki} P_{jki}(t) + P_{jki}(t) \tilde{A}_{jki} + \sum_{h \neq j} \alpha_{jh} P_{hki}(t) + \sum_{l \neq k} \beta_{kl} P_{jli}(t) + \sum_{v \neq i} q^{jk}_{iv} P_{jkv}(t) = -Q_{jki}. \qquad (16)$$

For $u_i = K_i x$, $i \in R$, the weak infinitesimal operator $\mathcal{L}V(x(t), \xi(t), \eta(t), \psi(t), t)$ is given by (12) with $\tilde{A}_{jki}$ given by (8). Pre- and post-multiplying (16) by $x^{T}$ and $x$ therefore gives

$$\mathcal{L}V(x(t), \xi(t), \eta(t), \psi(t), t) = -x^{T}(t)\, Q(\xi(t), \eta(t), \psi(t))\, x(t) < 0. \qquad (17)$$

By Theorem 2, the system under the control law $u_i = K_i x$, $i \in R$, is exponentially stable in the mean square for all $t \geq t_0$. Hence the proof is complete.

For a given control law, and relying on Theorem 4, one can verify the existence of the steady-state solutions $\{P_{jki}(t) > 0,\ j \in Z,\ k \in S,\ i \in R\}$. If the bounded solutions exist, the system (2) is exponentially stable in the mean square, and Theorem 3 guarantees that the system is also almost surely asymptotically stable.

Remarks: Under certain assumptions, several special cases of the above general result can be derived. Some of these cases were considered by other researchers for the stochastic stability of hybrid systems; others are new and have not been considered in the literature. It is important to consider the nature of the faulty components, the occurrence of failures at different locations in the system, and the nature of the FDI process.

5.1. Plant component failures

In this case, only the system matrix $A$ is subject to change, due to random failures in one or more plant components. The system model then becomes

$$\dot{x}(t) = A_j x(t) + B u(t), \quad j \in Z. \qquad (18)$$

The coupled matrix differential equations in Theorem 4 become

$$\dot{P}_{ji}(t) + \tilde{A}^{T}_{ji} P_{ji}(t) + P_{ji}(t) \tilde{A}_{ji} + \sum_{h \neq j} \alpha_{jh} P_{hi}(t) + \sum_{v \neq i} q^{j}_{iv} P_{jv}(t) + Q_{ji} = 0, \qquad P_{ji}(0) = 0, \quad t \in (-\infty, 0], \qquad (19)$$

where $\tilde{A}_{ji}$ is defined as

$$\tilde{A}_{ji} = A_j + B K_i - \tfrac{1}{2}\Big(\sum_{h \neq j} \alpha_{jh}\Big) I - \tfrac{1}{2}\Big(\sum_{v \neq i} q^{j}_{iv}\Big) I. \qquad (20)$$

The necessary and sufficient condition for the stochastic exponential stability of this type of system is the existence of steady-state solutions of equation (19).

5.2. Failures in actuators with no dynamics

In this case, the integrity of the plant components is assumed, and the actuators have no internal dynamics. Therefore, only the input matrix $B$ may change as a result of random failures in the actuators. The system can be described by

$$\dot{x}(t) = A x(t) + B_k u(t), \quad k \in S, \qquad (21)$$

with $\tilde{A}_{ki}$ defined as

$$\tilde{A}_{ki} = A + B_k K_i - \tfrac{1}{2}\Big(\sum_{l \neq k} \beta_{kl}\Big) I - \tfrac{1}{2}\Big(\sum_{v \neq i} q^{k}_{iv}\Big) I. \qquad (22)$$

The existence of bounded solutions of the coupled matrix differential equations

$$\dot{P}_{ki}(t) + \tilde{A}^{T}_{ki} P_{ki}(t) + P_{ki}(t) \tilde{A}_{ki} + \sum_{l \neq k} \beta_{kl} P_{li}(t) + \sum_{v \neq i} q^{k}_{iv} P_{kv}(t) + Q_{ki} = 0, \qquad P_{ki}(0) = 0, \quad t \in (-\infty, 0], \qquad (23)$$

is then the necessary and sufficient condition for the stochastic exponential stability of the system (21). This leads to a result similar to that obtained by Srichander and Walker (1993).

5.3. Failures in actuators with dynamics

In this case, the random failures occur in actuators with internal dynamics. The system is modelled by

$$\dot{x}(t) = A_k x(t) + B_k u(t), \quad k \in Z = S. \qquad (24)$$

Note that both the system matrix $A$ and the input matrix $B$ have the same failure index. This means that one failure induces simultaneous changes in both matrices; in other words, the two failure processes are replaced with one failure process in the system (24). The transition probabilities for this failure process are

$$p_{kl}(\Delta t) = \beta_{kl}\,\Delta t + o(\Delta t), \quad k \neq l, \qquad p_{kk}(\Delta t) = 1 - \sum_{l \neq k} \beta_{kl}\,\Delta t + o(\Delta t). \qquad (25)$$

The necessary and sufficient condition for the stochastic stability of this system is the existence of bounded solutions to the coupled matrix differential equations

$$\dot{P}_{ki}(t) + \tilde{A}^{T}_{ki} P_{ki}(t) + P_{ki}(t) \tilde{A}_{ki} + \sum_{l \neq k} \beta_{kl} P_{li}(t) + \sum_{v \neq i} q^{k}_{iv} P_{kv}(t) + Q_{ki} = 0, \qquad P_{ki}(0) = 0, \quad t \in (-\infty, 0], \qquad (26)$$

where

$$\tilde{A}_{ki} = A_k + B_k K_i - \tfrac{1}{2}\Big(\sum_{l \neq k} \beta_{kl}\Big) I - \tfrac{1}{2}\Big(\sum_{v \neq i} q^{k}_{iv}\Big) I. \qquad (27)$$

5.4. Nature of the FDI process

In this case, the FDI process is assumed to detect failures instantaneously and to always isolate them correctly. Therefore, the two failure processes and the FDI process are assumed to have identical state spaces. This situation is similar to a JLS. The transition probabilities for the common failure process are

$$p_{il}(\Delta t) = \pi_{il}\,\Delta t + o(\Delta t), \quad i \neq l, \qquad p_{ii}(\Delta t) = 1 - \sum_{l \neq i} \pi_{il}\,\Delta t + o(\Delta t). \qquad (28)$$

The conditional transition probability of the FDI process for this case becomes

$$q^{k}_{v} = \begin{cases} 1, & v = k \\ 0, & v \neq k, \end{cases} \qquad (29)$$

and

$$\tilde{A}_{ii} = A_i + B_i K_i - \tfrac{1}{2}\Big(\sum_{l \in R,\, l \neq i} \pi_{il}\Big) I. \qquad (30)$$

This is similar to the result given by Wonham (1971).

6. Numerical example

To illustrate the concepts presented above, we consider a scalar system with one possible failure in the actuator, i.e. $S = \{1, 2\}$, and one possible failure in the plant components, i.e. $Z = \{1, 2\}$. Both failure processes have Markovian transition characteristics. The FDI process is also Markovian, with four states $R = \{1, 2, 3, 4\}$. Numerical values are assigned to the scalar plant parameters $A_1$ and $A_2$, the input parameters $B_1$ and $B_2$, the plant and actuator failure rates, and the four conditional FDI rate matrices $q^{11}, q^{12}, q^{21}, q^{22} \in \mathbb{R}^{4 \times 4}$. Note that the open-loop system is unstable.

The objective is to test the existence of the steady-state solutions $\{P_{jki} > 0,\ j \in Z,\ k \in S,\ i \in R\}$ under a precomputed control law $u_i = K_i x$, $i \in R$. As per Theorem 4, the existence of the steady-state solutions guarantees exponential stability in the mean square and hence also almost sure asymptotic stability. Since the FDI process has four states, there are four controller gains, and two sets of gains $\{K_1, K_2, K_3, K_4\}$ are examined. The solutions of $P_{jki} > 0$ under the boundary conditions $P_{jki}(0) = 0$ are shown in figures 2 and 3, respectively. For the first set of controller gains the steady-state solutions exist, whereas for the second set the solutions are unbounded as $t \to -\infty$. According to Theorem 4, the system is therefore exponentially stable in the mean square and almost surely asymptotically stable under the first set of controller gains, but not under the second.
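
A minimal sketch of this test is given below: it integrates the coupled matrix differential equations (7) backward from $P_{jki}(0) = 0$ and checks, crudely, whether the solutions level off or keep growing. All numerical values in it (plant and input parameters, failure rates, FDI rates, gains, integration step and horizon) are hypothetical placeholders chosen for illustration; they are not the parameters of the paper's example, and the boundedness check is only a rough stand-in for examining convergence of the trajectories as in figures 2 and 3.

    import numpy as np

    # Backward integration of the coupled matrix differential equations (7) of
    # Theorem 4 with the closed-loop matrices of equation (8).
    n = 1                                               # scalar state, x in R^1
    Z, S, R = (1, 2), (1, 2), (1, 2, 3, 4)
    Aj = {1: np.array([[1.0]]), 2: np.array([[0.5]])}   # plant modes A_j (placeholders)
    Bk = {1: np.array([[1.0]]), 2: np.array([[0.5]])}   # actuator modes B_k (placeholders)
    Ki = {i: np.array([[-4.0]]) for i in R}             # one gain K_i per FDI state
    alpha = {(1, 2): 0.5, (2, 1): 0.1}                  # plant failure rates alpha_jh
    beta  = {(1, 2): 0.1, (2, 1): 0.1}                  # actuator failure rates beta_kl
    # conditional FDI rates q[(j, k)][i, v] (uniform placeholder values, diagonal unused)
    q = {(j, k): 0.5 * (np.ones((5, 5)) - np.eye(5)) for j in Z for k in S}
    Q = np.eye(n)                                       # Q_jki > 0, taken identical here

    def A_tilde(j, k, i):
        """Closed-loop matrix of equation (8)."""
        s = sum(alpha[(j, h)] for h in Z if h != j) \
          + sum(beta[(k, l)]  for l in S if l != k) \
          + sum(q[(j, k)][i, v] for v in R if v != i)
        return Aj[j] + Bk[k] @ Ki[i] - 0.5 * s * np.eye(n)

    P = {(j, k, i): np.zeros((n, n)) for j in Z for k in S for i in R}   # P_jki(0) = 0
    dt, steps = 0.001, 20000                            # integrate backward over [-20, 0]
    for _ in range(steps):
        newP = {}
        for (j, k, i), Pjki in P.items():
            At = A_tilde(j, k, i)
            coupling = sum(alpha[(j, h)] * P[(h, k, i)] for h in Z if h != j) \
                     + sum(beta[(k, l)]  * P[(j, l, i)] for l in S if l != k) \
                     + sum(q[(j, k)][i, v] * P[(j, k, v)] for v in R if v != i)
            rhs = At.T @ Pjki + Pjki @ At + coupling + Q    # (7) reads Pdot = -rhs
            newP[(j, k, i)] = Pjki + dt * rhs               # explicit step from t to t - dt
        P = newP

    print("P_111 after backward integration:", P[(1, 1, 1)].ravel())
    print("all solutions bounded so far:",
          all(np.all(np.abs(v) < 1e6) for v in P.values()))

Destabilizing the placeholder gains (for example, setting some $K_i$ to zero so that a failed mode is left uncompensated) makes the same iteration diverge, which is the qualitative behaviour reported for the second set of gains.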

Stochastic stability of FTCS.. P111 1 P11 Solutions of coupled differential equations.7..5.4...1 P11 P114 P11 Solutions of coupled differential equations 14 1 1 4 P1 P14 P1-1 - - -7 - -5-4 - - -1-1 - - -7 - -5-4 - - -1.4 P11.5 Solutions of coupled differential equations.5..5..15.1.5 P1 P14 P1 Solutions of coupled differential equations 1.5 1.5 P1 P P4 P -1 - - -7 - -5-4 - - -1-1 - - -7 - -5-4 - - -1 Figure. Bounded solutions with K 1 5, K 5, K 5, and K 4 5 5. guarantee the deterministic closed-loop stability. However, as illustrated, the stochastic stability is only ensured for the rst set of controller gains. 7. Conclusion A dynamical model for acte fault tolerant control systems with multiple failure processes, for the purpose of studying stochastic stability, has been developed. In particular, a necessary and su cient condition for exponential stability in the mean square has been dered. It has been shown that exponential stability in the mean square is su cient for almost sure asymptotic stability. The proposed model uses two separate failure processes to describe all possible combinations of plant component failures and actuator failures. Existing results in stochastic stability analysis have been shown as special cases of this general result. Moreover, this model takes into consideration the failure of actuators with internal dynamics. A numerical example is included to illustrate the theoretical results, and to demonstrate that deterministic stability does not necessarily imply stochastic stability.

Figure 3. Unbounded solutions of the coupled differential equations (trajectories of the $P_{jki}(t)$) obtained with the second set of controller gains.

References

Boukas, E. K., 1993, Control of systems with controlled jump Markov disturbances. Control Theory and Advanced Technology, 9, 577-595.
Bucy, R. S., 1965, Stability and positive supermartingales. Journal of Differential Equations, 1, 151-155.
Doob, J. L., 1953, Stochastic Processes (New York: Wiley).
Feng, X., Loparo, K. A., Ji, Y., and Chizeck, H. J., 1992, Stochastic stability properties of jump linear systems. IEEE Transactions on Automatic Control, 37, 38-53.
Hopkins, W., 1987, Optimal stabilization of families of linear stochastic differential equations with jump coefficients and multiplicative noise. SIAM Journal on Control and Optimization, 25.
Ji, Y., and Chizeck, H. J., 1990, Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control. IEEE Transactions on Automatic Control, 35, 777-788.
Ji, Y., Chizeck, H. J., Feng, X., and Loparo, K. A., 1991, Stability and control of discrete-time jump linear systems. Control Theory and Advanced Technology, 7, 247-270.
Kats, I. I., and Krasovskii, N. N., 1960, On the stability of systems with random parameters. Journal of Applied Mathematics and Mechanics, 24.
Khasminskii, R. Z., 1980, Stochastic Stability of Differential Equations (Alphen aan den Rijn, The Netherlands: Sijthoff & Noordhoff).

Kozin, F., 1969, A survey of stability of stochastic systems. Automatica, 5, 95-112.
Kushner, H. J., 1967, Stochastic Stability and Control (New York: Academic Press).
Loparo, K., and Feng, X., 1996, Stability of stochastic systems. In W. S. Levine (ed.), The Control Handbook (Boca Raton, FL: CRC Press), pp. 1105-1126.
Mahmoud, M. M., Jiang, J., and Zhang, Y. M., 1999, Analysis of the stochastic stability for fault tolerant control systems. Proceedings of the 38th IEEE Conference on Decision and Control, Phoenix, AZ.
Mahmoud, M. M., Jiang, J., and Zhang, Y. M., 2000, Stochastic stability of fault tolerant control systems with system uncertainties. Proceedings of the American Control Conference, Chicago, IL.
Mariton, M., 1989, Detection delays, false alarm rates and the reconfiguration of control systems. International Journal of Control, 49.
Srichander, R., and Walker, B. K., 1993, Stochastic stability analysis for continuous-time fault tolerant control systems. International Journal of Control, 57, 433-452.
Sworder, D. D., 1969, Feedback control of a class of linear systems with jump parameters. IEEE Transactions on Automatic Control, 14, 9-14.
Willsky, A. S., 1976, A survey of design methods for failure detection in dynamic systems. Automatica, 12, 601-611.
Wonham, W. M., 1971, Random differential equations in control theory. In Probabilistic Methods in Applied Mathematics, Vol. 2 (New York: Academic Press), pp. 131-212.
Zhao, Q., and Jiang, J., 1998, Reliable state feedback control system design against actuator failures. Automatica, 34, 1267-1272.