AN ADDENDUM TO THE PROBLEM OF STOCHASTIC OBSERVABILITY
V. Dragan and T. Morozan
Institute of Mathematics of the Romanian Academy, P.O. Box , RO-77, Bucharest, Romania

Abstract. In this paper we introduce a concept of stochastic observability for a class of linear stochastic systems subjected both to multiplicative white noise and to Markovian jumping. The definition of stochastic observability adopted here extends to this framework the definition of uniform observability of a time-varying linear deterministic system. By several examples we show that the concept of stochastic observability introduced in this paper is less restrictive than those introduced in some previous papers. We also show that the stochastic observability introduced in this paper does not always imply stochastic detectability, as might be expected.

Keywords. Linear stochastic systems, Markovian jumping, multiplicative white noise, stochastic observability.

1 Introduction

Both the observability property and the controllability property of a linear system have played a crucial role in solving a wide class of control problems. We recall that, starting with the pioneering paper of Kalman [7], the observability and controllability properties provide sufficient conditions guaranteeing the existence of the stabilizing solution of the matrix Riccati differential equation connected with the linear quadratic problem and the filtering problem [12, 13]. In the stochastic framework, the concept of stochastic observability was introduced in order to provide conditions which guarantee the existence of the stabilizing solutions of the matrix Riccati differential equations of stochastic control [9]. For linear systems subjected to Markovian jumping, definitions of stochastic observability were introduced in [8, 6]. In this paper we introduce a concept of stochastic observability for a class of linear stochastic systems subjected both to multiplicative white noise and to Markovian jumping.
The definition of stochastic observability adopted here extends to this framework the definition of uniform observability of a time-varying linear deterministic system [7, 1]. By several examples we show that the concept of stochastic observability introduced in this paper is less restrictive than those introduced in [8] and [6]. We also show that it does not always imply stochastic detectability, as might be expected. Finally, we show that
the stochastic observability introduced here guarantees the positivity of the observability Gramian (if it exists) and, additionally, that any positive solution of the corresponding Riccati equation is stabilizing, as happens in the deterministic case.

2 Stochastic observability

Consider the system

    dx(t) = A_0(\eta(t))x(t)dt + \sum_{k=1}^{r} A_k(\eta(t))x(t)dw_k(t)    (2.1)
    y(t) = C_0(\eta(t))x(t)    (2.2)

where x(t) \in R^n, y(t) \in R^p; w(t) = (w_1(t), ..., w_r(t))^T, t \ge 0, is a standard Wiener process on a given probability space (\Omega, \mathcal{F}, P); \eta(t), t \ge 0, is a right-continuous homogeneous Markov chain with state space D = {1, 2, ..., d} and transition probability matrix P(t) = e^{Qt}, t \ge 0, Q = [q_{ij}], with \sum_{j=1}^{d} q_{ij} = 0, i \in D, and q_{ij} \ge 0 for i \ne j (see [2]). Throughout this paper we assume that {w(t)}_{t \ge 0} and {\eta(t)}_{t \ge 0} are independent stochastic processes and that P{\eta(0) = i} > 0 for all i \in D. For each t_0 \ge 0 and x_0 \in R^n, x(t, t_0, x_0) stands for the solution of (2.1) with the initial condition x(t_0, t_0, x_0) = x_0 (for the precise definition of the solution see [5, 11]). Let \Phi(t, t_0) be the fundamental matrix solution of (2.1); that is, the j-th column of \Phi(t, t_0) is x(t, t_0, e_j), j = 1, 2, ..., n, where e_1, ..., e_n is the canonical basis of R^n.

Now we are able to introduce the definition of stochastic observability.

Definition 2.1 We say that the system (2.1)-(2.2) is stochastically observable, or that the triple (C_0, A_0, A_1, ..., A_r; Q) is observable, if there exist \beta > 0 and \tau > 0 such that

    E[ \int_t^{t+\tau} \Phi^T(s,t) C_0^T(\eta(s)) C_0(\eta(s)) \Phi(s,t) ds | \eta(t) = i ] \ge \beta I_n    (2.3)

for all i \in D and all t \ge 0, where E[ \cdot | \eta(t) = i ] denotes the conditional expectation with respect to the event {\eta(t) = i}. Sometimes we shall write "(C, A; Q) is observable" instead of "(C_0, A_0, A_1, ..., A_r; Q) is observable".

Remark 2.1 a) In the particular case D = {1} and A_k = 0, 1 \le k \le r, inequality (2.3) reduces to the well-known definition of uniform observability for a time-varying linear deterministic system (see e.g. [7, 1]).
b) If D = {1}, the equations (2.1)-(2.2) become

    dx(t) = A_0 x(t)dt + \sum_{k=1}^{r} A_k x(t)dw_k(t)    (2.4)
    y(t) = C_0 x(t).    (2.5)

In this case (2.3) reads

    E[ \int_t^{t+\tau} \Phi^T(s,t) C_0^T C_0 \Phi(s,t) ds ] \ge \beta I_n    (2.6)

for all t \ge 0, \Phi(s,t) being now the fundamental matrix solution of (2.4).

c) In the case d \ge 2 and A_k(i) = 0, 1 \le k \le r, i \in D, the system (2.1)-(2.2) reduces to

    \dot{x}(t) = A_0(\eta(t))x(t)    (2.7)
    y(t) = C_0(\eta(t))x(t),    (2.8)

and in this case, if (2.3) holds with \Phi(s,t) standing for the fundamental matrix solution of (2.7), we say that the triple (C_0, A_0; Q) is observable.

3 Some auxiliary results

3.1 Liapunov-type operators and exponential stability

Let S_n \subset R^{n \times n} be the space of symmetric matrices and S_n^d = S_n \oplus S_n \oplus ... \oplus S_n (d factors). Let L : S_n^d \to S_n^d be defined by

    (LX)(i) = A_0(i)X(i) + X(i)A_0^T(i) + \sum_{k=1}^{r} A_k(i)X(i)A_k^T(i) + \sum_{j=1}^{d} q_{ji}X(j)    (3.1)

for all i \in D, X = (X(1), ..., X(d)) \in S_n^d. The operator L will be termed the Liapunov-type operator defined by the system (A_0, A_1, ..., A_r; Q). It is easy to see that its adjoint operator L^* is given by

    (L^*X)(i) = A_0^T(i)X(i) + X(i)A_0(i) + \sum_{k=1}^{r} A_k^T(i)X(i)A_k(i) + \sum_{j=1}^{d} q_{ij}X(j).    (3.2)

Let \tilde{L}, \tilde{L}^* : S_n^d \to S_n^d be the linear operators obtained from (3.1), (3.2) by taking A_k(i) = 0, 1 \le k \le r, i \in D. Consider the linear differential equation on S_n^d

    (d/dt) S(t) = L S(t)    (3.3)

and let e^{Lt} denote the linear evolution operator defined on S_n^d by (3.3).
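As a concrete illustration of the two operators just defined, the following sketch (hypothetical code, not from the paper; all data are randomly generated) evaluates L and L^* on S_n^d and checks numerically that they are adjoint to each other with respect to the inner product <X, Y> = \sum_i tr(X(i)Y(i)). Note the transposed Markov coupling: q_{ji} in L versus q_{ij} in L^* is what makes the duality identity hold.

```python
# Sketch: the Liapunov-type operator L and its adjoint L* on S_n^d,
# checked against each other through <X, Y> = sum_i tr(X(i) Y(i)).
import numpy as np

def liap(X, A, Q):
    # (L X)(i) = A_0(i) X(i) + X(i) A_0(i)^T
    #            + sum_k A_k(i) X(i) A_k(i)^T + sum_j q_{ji} X(j)
    d = len(X)
    return [A[i][0] @ X[i] + X[i] @ A[i][0].T
            + sum(Ak @ X[i] @ Ak.T for Ak in A[i][1:])
            + sum(Q[j, i] * X[j] for j in range(d)) for i in range(d)]

def liap_adj(Y, A, Q):
    # (L* Y)(i) = A_0(i)^T Y(i) + Y(i) A_0(i)
    #             + sum_k A_k(i)^T Y(i) A_k(i) + sum_j q_{ij} Y(j)
    d = len(Y)
    return [A[i][0].T @ Y[i] + Y[i] @ A[i][0]
            + sum(Ak.T @ Y[i] @ Ak for Ak in A[i][1:])
            + sum(Q[i, j] * Y[j] for j in range(d)) for i in range(d)]

rng = np.random.default_rng(0)
n, d = 2, 2
A = [[rng.standard_normal((n, n)) for _ in range(2)] for _ in range(d)]  # A_0(i), A_1(i)
Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
sym = lambda M: (M + M.T) / 2
X = [sym(rng.standard_normal((n, n))) for _ in range(d)]
Y = [sym(rng.standard_normal((n, n))) for _ in range(d)]
ip = lambda U, V: sum(np.trace(U[i] @ V[i]) for i in range(d))
gap = abs(ip(liap(X, A, Q), Y) - ip(X, liap_adj(Y, A, Q)))
```

For every pair (X, Y) the two inner products agree up to rounding error, which is a useful sanity check on any implementation of these operators.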
The following result was proved in [3] in a more general setting.

Proposition 3.1 a) 0 \le e^{\tilde{L}t}X \le e^{Lt}X and 0 \le e^{\tilde{L}^*t}X \le e^{L^*t}X, for all X = (X(1), X(2), ..., X(d)) \in S_n^d with X(i) \ge 0, i \in D, and all t \ge 0.
b) [e^{L^*(t-s)}X](i) = E[ \Phi^T(t,s)X(\eta(t))\Phi(t,s) | \eta(s) = i ], for all t \ge s \ge 0, i \in D, X = (X(1), X(2), ..., X(d)) \in S_n^d.

Definition 3.1 We say that the zero solution of the equation (2.1) is mean square exponentially stable (MSES for shortness), or that the system (A_0, A_1, ..., A_r; Q) is stable, if there exist \alpha > 0, \beta > 0 such that

    E[ |x(t)|^2 ] \le \beta e^{-\alpha t} |x(0)|^2

for all t \ge 0 and x(0) \in R^n.

The following result provides necessary and sufficient conditions for the MSES of the zero solution of (2.1).

Theorem 3.1 The following are equivalent:
(i) The solution x(t) = 0 of the equation (2.1) is MSES.
(ii) lim_{t \to \infty} E[ |x(t)|^2 ] = 0 for any solution x(t) of (2.1).
(iii) The eigenvalues of the operator L are located in the half-plane Re \lambda < 0.
(iv) There exists H = (H(1), H(2), ..., H(d)) \in S_n^d with H(i) > 0, i \in D, such that the equation LX + H = 0 has a solution X = (X(1), ..., X(d)) with X(i) > 0, i \in D.
(v) There exists H as before such that the equation L^*X + H = 0 has a solution X = (X(1), ..., X(d)) with X(i) > 0, i \in D.
(vi) There exists X = (X(1), ..., X(d)) with X(i) > 0 such that L^*X < 0.

For a detailed proof see [3].

3.2 Stochastic detectability

Definition 3.2 We say that the system (2.1)-(2.2) is stochastically detectable, or that the triple (C, A; Q) is detectable, if there exists L = (L(1), L(2), ..., L(d)), L(i) \in R^{n \times p}, such that the zero solution of the equation

    dx(t) = (A_0(\eta(t)) + L(\eta(t))C_0(\eta(t)))x(t)dt + \sum_{k=1}^{r} A_k(\eta(t))x(t)dw_k(t)

is MSES. L will be termed a stabilizing injection.
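Criterion (iii) of Theorem 3.1 lends itself to computation: in the vec basis the operator L^* becomes an ordinary d n^2 \times d n^2 matrix built from Kronecker products, and MSES amounts to its spectral abscissa being negative (L and L^* have the same spectrum). The sketch below uses illustrative data, not an example from the paper.

```python
# Sketch: spectral test for mean square exponential stability (Theorem 3.1 (iii)).
import numpy as np

def lstar_matrix(A, Q):
    # Matrix of L* in the vec basis, using vec(M X N) = (N^T kron M) vec(X):
    # diagonal block i carries I kron A_0(i)^T + A_0(i)^T kron I
    # + sum_k A_k(i)^T kron A_k(i)^T, and the chain contributes q_{ij} I_{n^2}.
    d = len(A)
    n = A[0][0].shape[0]
    I = np.eye(n)
    blocks = [[Q[i, j] * np.eye(n * n) for j in range(d)] for i in range(d)]
    for i in range(d):
        blocks[i][i] += (np.kron(I, A[i][0].T) + np.kron(A[i][0].T, I)
                         + sum(np.kron(Ak.T, Ak.T) for Ak in A[i][1:]))
    return np.block(blocks)

Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
A = [[-np.eye(2), 0.1 * np.eye(2)],      # mode 1: A_0(1), A_1(1)
     [-np.eye(2), 0.1 * np.eye(2)]]      # mode 2: A_0(2), A_1(2)
spec_abscissa = max(np.linalg.eigvals(lstar_matrix(A, Q)).real)
mses = spec_abscissa < 0
```

For this data the diagonal blocks contribute -1.99 I and the chain contributes eigenvalues {0, -2}, so the spectral abscissa is -1.99 and the zero solution is MSES.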
Proposition 3.2 The following are equivalent:
(i) The triple (C, A; Q) is detectable.
(ii) The system of linear equations

    A_0^T(i)X(i) + X(i)A_0(i) + C_0^T(i)\Gamma^T(i) + \Gamma(i)C_0(i) + \sum_{k=1}^{r} A_k^T(i)X(i)A_k(i) + \sum_{j=1}^{d} q_{ij}X(j) + I_n = 0    (3.4)

has a solution X = (X(1), ..., X(d)) \in S_n^d, \Gamma = (\Gamma(1), ..., \Gamma(d)) with X(i) > 0, \Gamma(i) \in R^{n \times p}, i \in D.
(iii) The linear matrix inequality

    A_0^T(i)X(i) + X(i)A_0(i) + C_0^T(i)\Gamma^T(i) + \Gamma(i)C_0(i) + \sum_{k=1}^{r} A_k^T(i)X(i)A_k(i) + \sum_{j=1}^{d} q_{ij}X(j) < 0    (3.5)

has a solution X = (X(1), X(2), ..., X(d)) \in S_n^d, \Gamma = (\Gamma(1), \Gamma(2), ..., \Gamma(d)) with X(i) > 0, \Gamma(i) \in R^{n \times p}, i \in D.
Moreover, if (X, \Gamma), X > 0, is a solution of (3.4), then L = (L(1), L(2), ..., L(d)), L(i) = X^{-1}(i)\Gamma(i), provides a stabilizing injection.

4 Main results

Based on Proposition 3.1 (b) we obtain:

Proposition 4.1 The following are equivalent:
(i) The system (2.1)-(2.2) is stochastically observable.
(ii) There exists \tau > 0 such that

    \int_0^{\tau} e^{L^* s} \tilde{C} ds > 0,    (4.1)

where \tilde{C} = (\tilde{C}(1), ..., \tilde{C}(d)), \tilde{C}(i) = C_0^T(i)C_0(i), i \in D.
(iii) There exists \tau > 0 such that X_0(\tau) > 0, where X_0(t) is the solution of the Cauchy problem

    (d/dt) X_0(t) = L^* X_0(t) + \tilde{C}, X_0(0) = 0.    (4.2)

Proof. (i) \Leftrightarrow (ii) follows from Proposition 3.1 (b). Since

    X_0(t) = \int_0^t e^{L^*(t-s)} \tilde{C} ds = \int_0^t e^{L^* s} \tilde{C} ds, t \ge 0,

it follows that (iii) \Leftrightarrow (ii). The proof is complete.
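Criterion (iii) of Proposition 4.1 suggests a direct numerical test: integrate the Cauchy problem from zero and check whether X_0(\tau) becomes positive definite. The sketch below does this with a crude Euler scheme for an illustrative two-mode system without multiplicative noise; the data (A_0(i) = -0.5 I_2, outputs observing a single, different coordinate in each mode) are chosen so that neither deterministic pair is observable, while the chain coupling renders X_0(\tau) > 0.

```python
# Sketch: Euler integration of d/dt X_0 = L* X_0 + C~, X_0(0) = 0, followed
# by a positive-definiteness check of X_0(tau).  Illustrative data only.
import numpy as np

q = 1.0
Q = np.array([[-q, q], [q, -q]])
A0 = [-0.5 * np.eye(2), -0.5 * np.eye(2)]
C = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # C~(i) = C_0(i)^T C_0(i)

dt, tau = 1e-3, 5.0
X = [np.zeros((2, 2)), np.zeros((2, 2))]
for _ in range(int(tau / dt)):
    X = [X[i] + dt * (A0[i].T @ X[i] + X[i] @ A0[i]
                      + sum(Q[i, j] * X[j] for j in range(2)) + C[i])
         for i in range(2)]

min_eig = min(np.linalg.eigvalsh(X[i]).min() for i in range(2))
positive = min_eig > 0
```

Here min_eig settles near 0.33, so X_0(5) > 0 in both modes and the test declares the system observable, even though each frozen pair (C_0(i), A_0(i)) is unobservable.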
Further, from Proposition 4.1 and Proposition 3.1 (a) we obtain:

Proposition 4.2 The following hold:
(i) If for each i \in D the pair (C_0(i), A_0(i)) is observable (in the deterministic sense), then the triple (C_0, A_0; Q) is observable.
(ii) If (C_0, A_0; Q) is observable, then (C, A; Q) is observable.

Proposition 4.3 Let X_0(t) be the solution of the Cauchy problem (4.2). If there exists \tau > 0 such that X_0(\tau) > 0, then X_0(t) > 0 for all t > 0.

Proof. For each t > 0 we have the representation

    X_0(t) = (X_0(t,1), X_0(t,2), ..., X_0(t,d)) = \int_0^t e^{L^*(t-s)} \tilde{C} ds.

Since e^{L^*(t-s)} : S_n^d \to S_n^d is a positive operator, we deduce that X_0(t) \ge 0 for all t \ge 0. Moreover, if t \ge \tau, then X_0(t) \ge X_0(\tau); therefore, if X_0(\tau) > 0, we have X_0(t) > 0 for all t \ge \tau. It remains to show that X_0(t) > 0 for 0 < t < \tau. To this end we show that det X_0(t,i) > 0 for 0 < t < \tau, i \in D. Indeed, since X_0(t,i) = ( \int_0^t e^{L^* s} \tilde{C} ds )(i), the map t \mapsto det X_0(t,i) is an analytic function, and it is not identically zero because det X_0(\tau,i) > 0; hence the set of its zeros on [0, \tau] has no accumulation point. It follows that there exists \tau_1 > 0 such that det X_0(t,i) > 0 for all t \in (0, \tau_1]. Invoking again the monotonicity of the function t \mapsto X_0(t), we conclude that X_0(t) > 0 for all t \ge \tau_1, and the proof ends.

Remark 4.1 From Proposition 4.1 and Proposition 4.3 it follows that the stochastic observability of the system (2.1)-(2.2) may be checked by a numerical procedure which verifies the positivity of the solution X_0(t) over a sufficiently long interval of time.

Proposition 4.4 The triple (C, A; Q) is observable if and only if there do not exist \tau > 0, i \in D and x_0 \ne 0 such that

    E[ |y(t, 0, x_0)|^2 | \eta(0) = i ] = 0 for all t \in [0, \tau],

where y(t, 0, x_0) = C_0(\eta(t))x(t, 0, x_0), x(t, 0, x_0) being the solution of (2.1) with the initial condition x(0, 0, x_0) = x_0.

The next result provides a sufficient condition for stochastic observability. Its proof may be found in [1].

Proposition 4.5 The triple (C, A; Q) is observable if for every i \in D we have rank M(i) = n, where

    M(i) = [ C_0^T(i), A_0^T(i)C_0^T(i), ..., (A_0^T(i))^{n-1}C_0^T(i), q_{i1}C_0^T(1), ..., q_{id}C_0^T(d), A_1^T(i)C_0^T(i), ..., A_r^T(i)C_0^T(i) ].
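A possible implementation of the rank test of Proposition 4.5 is sketched below (under the reconstruction of M(i) given above; the data are illustrative, with A_0(i) = I_2, no multiplicative noise, and outputs observing a single, different coordinate in each mode, so only the Markov coupling columns make M(i) full rank).

```python
# Sketch: assemble M(i) column-block by column-block and check rank M(i) = n
# for every mode i (sufficient condition of Proposition 4.5).
import numpy as np

def observable_rank_test(A0, A_noise, C0, Q):
    d = len(A0)
    n = A0[0].shape[0]
    for i in range(d):
        cols = [np.linalg.matrix_power(A0[i].T, p) @ C0[i].T for p in range(n)]
        cols += [Q[i, j] * C0[j].T for j in range(d)]       # Markov coupling blocks
        cols += [Ak.T @ C0[i].T for Ak in A_noise[i]]       # diffusion blocks
        if np.linalg.matrix_rank(np.hstack(cols)) < n:
            return False
    return True

Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
A0 = [np.eye(2), np.eye(2)]
A_noise = [[], []]                                          # r = 0: no white noise
C0 = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
ok = observable_rank_test(A0, A_noise, C0, Q)
```

For this data each block [C_0^T(i), A_0^T(i)C_0^T(i)] has rank 1, and it is the column q_{ij} C_0^T(j) from the other mode that raises rank M(i) to n = 2.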
In the following examples, the stochastic observability used in this paper is compared with other types of stochastic observability, for example those introduced in [6] and [8]. We also
show that the stochastic observability used in this paper does not always imply stochastic detectability, as might be expected.

Example 1 The case of a system subjected to Markovian jumping with d = 2, n = 2, p = 1. Take

    A_0(1) = A_0(2) = \alpha I_2, C_0(1) = [1 0], C_0(2) = [0 1], Q = [[-q, q], [q, -q]], \alpha \in R, q > 0.

It is obvious that the pairs (C_0(1), A_0(1)) and (C_0(2), A_0(2)) are not observable in the deterministic sense. Therefore this system is not stochastically observable in the sense of [8, 6]. We shall show that it is stochastically observable in the sense of Definition 2.1. To this end we use the implication (iii) \Rightarrow (i) in Proposition 4.1: we show that there exists \tau > 0 such that X_1(\tau) > 0 and X_2(\tau) > 0, where X_i(t), i = 1, 2, is the solution of the Cauchy problem

    (d/dt) X_i(t) = A_0^T(i)X_i(t) + X_i(t)A_0(i) + \sum_{j=1}^{2} q_{ij}X_j(t) + C_0^T(i)C_0(i), X_i(0) = 0, i = 1, 2.    (4.3)

From the representation formula

    (X_1(t), X_2(t)) = \int_0^t e^{L^*(t-s)} \tilde{C} ds

it follows that X_i(t) \ge 0 for all t \ge 0. Therefore it is sufficient to show that there exists \tau > 0 such that det X_i(\tau) > 0. Set

    X_i(t) = [[x_i(t), y_i(t)], [y_i(t), z_i(t)]], i = 1, 2.

After some simple computations we obtain

    det X_i(t) = x_i(t)z_i(t) - y_i^2(t) = x_i(t)z_i(t) = x(t)z(t), t \ge 0,

where

    x(t) = (1/2) \int_0^t [ e^{2\alpha s} + e^{2(\alpha - q)s} ] ds, z(t) = (1/2) \int_0^t [ e^{2\alpha s} - e^{2(\alpha - q)s} ] ds.

For details see [4]. It is easy to see that for every \alpha \in R and q > 0 we have lim_{t \to \infty} x(t)z(t) > 0.

Remark 4.2 Let us consider a system of type (2.1)-(2.2) with n = 2, d = 2, p = 1, r = 1 and A_0(1) = A_0(2) = \alpha I_2, C_0(1) = [1 0], C_0(2) = [0 1], A_1(i) an arbitrary 2 \times 2 matrix,

    Q = [[-q, q], [q, -q]], \alpha \in R, q > 0.

Combining the conclusion of Example 1 with Proposition 4.2 (ii), it follows that the system (C, (A_0, A_1); Q) is observable.

Example 2 Stochastic observability does not always imply stochastic detectability. Let us consider the system subjected to Markovian jumping with d = 2, n = 2, p = 1,

    A_0(1) = A_0(2) = (q/2) I_2, C_0(1) = [1 0], C_0(2) = [0 1], Q = [[-q, q], [q, -q]].    (4.4)

From the previous example (with \alpha = q/2) we conclude that the system (C_0, A_0; Q) is observable. Invoking (i) \Leftrightarrow (ii) from Proposition 3.2, if the system (4.4) were stochastically detectable, there would exist matrices X(i) > 0 and \Gamma(i) = [\gamma_1(i), \gamma_2(i)]^T, i = 1, 2, verifying the system of linear equations

    A_0^T(i)X(i) + X(i)A_0(i) + \Gamma(i)C_0(i) + C_0^T(i)\Gamma^T(i) + \sum_{j=1}^{2} q_{ij}X(j) + I_2 = 0, i = 1, 2.

For i = 1, since A_0^T(1)X(1) + X(1)A_0(1) = qX(1) and q_{11}X(1) + q_{12}X(2) = -qX(1) + qX(2), this reduces to

    [[2\gamma_1(1), \gamma_2(1)], [\gamma_2(1), 0]] = -qX(2) - I_2 < -I_2,

which is a contradiction, since the matrix on the left has a zero diagonal entry.

Example 3 Let us consider the stochastic system (2.1)-(2.2) with n = 2, d = 2, r = 1, p = 1, A_0(1) = A_0(2) = \alpha I_2, C_0(1) = [1 0], C_0(2) = [0 1], A_1(1) = \beta I_2, A_1(2) an arbitrary 2 \times 2 matrix,

    Q = [[-q, q], [q, -q]], \alpha \in R, \beta \in R, q > 0,

where the parameters satisfy 2\alpha - q + \beta^2 = 0. From Remark 4.2 it follows that the above system is stochastically observable. As in the previous example, based on the equivalence (i) \Leftrightarrow (ii) from Proposition 3.2, one may show that it is not stochastically detectable (see [4] for details).

The following two results show that the stochastic observability defined in this paper guarantees the positivity of the observability Gramian and that all positive semidefinite solutions of the Riccati equation are stabilizing solutions, as in the deterministic case. The proofs are omitted for shortness and may be found in [1].

Proposition 4.6 Assume that (C_0, A_0, ..., A_r; Q) is observable and that the algebraic equation on S_n^d

    L^* X + \tilde{C} = 0    (4.5)

has a solution X \ge 0. Then:
(i) The system (A_0, A_1, ..., A_r; Q) is stable.
(ii) X > 0.
(iii) The equation (4.5) has a unique positive semidefinite solution.
Let us consider the following system of generalized algebraic Riccati equations:

    A_0^T(i)X(i) + X(i)A_0(i) + \sum_{k=1}^{r} A_k^T(i)X(i)A_k(i) + \sum_{j=1}^{d} q_{ij}X(j) - ( X(i)B_0(i) + \sum_{k=1}^{r} A_k^T(i)X(i)B_k(i) ) ( D^T(i)D(i) + \sum_{k=1}^{r} B_k^T(i)X(i)B_k(i) )^{-1} ( B_0^T(i)X(i) + \sum_{k=1}^{r} B_k^T(i)X(i)A_k(i) ) + C_0^T(i)C_0(i) = 0,    (4.6)

where A_k(i), C_0(i) are as in (2.1)-(2.2) and B_k(i), D(i) are given n \times m and p \times m matrices. Such algebraic equations are closely related to the linear quadratic optimization problem for a linear stochastic system subjected both to Markovian jumping and multiplicative white noise (see [4, 1] for details). The following result is proved in [1].

Proposition 4.7 If (C, A; Q) is observable, then any positive semidefinite solution of the system (4.6) is a stabilizing solution.

References

[1] B.D.O. Anderson, J.B. Moore, New results in linear system stability, SIAM J. Control, vol. 7, no. 3.
[2] J.L. Doob, Stochastic processes, Wiley, New York.
[3] V. Dragan, T. Morozan, Stability and robust stabilization to linear stochastic systems described by differential equations with Markovian jumping and multiplicative white noise, Stochastic Analysis and Applications, nr. 1, 2002 (to appear).
[4] V. Dragan, T. Morozan, The linear quadratic optimization problem and tracking problem for a class of linear stochastic systems with multiplicative white noise and Markovian jumping, Preprint nr. 3/2001, Institute of Mathematics of the Romanian Academy.
[5] A. Friedman, Stochastic differential equations and applications, vol. I, Academic Press.
[6] Y. Ji, H.J. Chizeck, Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control, IEEE Trans. Automat. Control, 35, no. 7, 1990.
[7] R. Kalman, Contributions to the theory of optimal control, Boletin de la Sociedad Matematica Mexicana, Segunda Serie, 5.
[8] T. Morozan, Optimal stationary control for dynamic systems with Markov perturbations, Stochastic Analysis and Applications, 1.
[9] T. Morozan, Stochastic uniform observability and Riccati equations of stochastic control, Revue Roum. Math. Pures et Appl., 9.
[10] T. Morozan, Linear quadratic control and tracking problems for time-varying stochastic differential systems perturbed by a Markov chain, Revue Roum. Math. Pures et Appl., to appear.
[11] B. Oksendal, Stochastic differential equations, Fifth Edition, Springer, 2000.
[12] W.H. Wonham, On a matrix Riccati equation of stochastic control, SIAM J. Control Optim., 6.
[13] W.H. Wonham, Random differential equations in control theory, in: Probabilistic Methods in Applied Mathematics, vol. 2 (A.T. Bharucha-Reid, Ed.), Academic Press, New York.
The Important State Coordinates of a Nonlinear System Arthur J. Krener 1 University of California, Davis, CA and Naval Postgraduate School, Monterey, CA ajkrener@ucdavis.edu Summary. We offer an alternative
More informationLinear Hyperbolic Systems
Linear Hyperbolic Systems Professor Dr E F Toro Laboratory of Applied Mathematics University of Trento, Italy eleuterio.toro@unitn.it http://www.ing.unitn.it/toro October 8, 2014 1 / 56 We study some basic
More informationCONTROLLABILITY OF NONLINEAR SYSTEMS WITH DELAYS IN BOTH STATE AND CONTROL VARIABLES
KYBERNETIKA VOLUME 22 (1986), NUMBER 4 CONTROLLABILITY OF NONLINEAR SYSTEMS WITH DELAYS IN BOTH STATE AND CONTROL VARIABLES K. BALACHANDRAN Relative controllability of nonlinear systems with distributed
More informationStability of an abstract wave equation with delay and a Kelvin Voigt damping
Stability of an abstract wave equation with delay and a Kelvin Voigt damping University of Monastir/UPSAY/LMV-UVSQ Joint work with Serge Nicaise and Cristina Pignotti Outline 1 Problem The idea Stability
More informationELA
POSITIVE DEFINITE SOLUTION OF THE MATRIX EQUATION X = Q+A H (I X C) δ A GUOZHU YAO, ANPING LIAO, AND XUEFENG DUAN Abstract. We consider the nonlinear matrix equation X = Q+A H (I X C) δ A (0 < δ 1), where
More informationOn an Output Feedback Finite-Time Stabilization Problem
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 46, NO., FEBRUARY 35 [] E. K. Boukas, Control of systems with controlled jump Markov disturbances, Contr. Theory Adv. Technol., vol. 9, pp. 577 595, 993. [3],
More informationON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM
ON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM OLEG ZUBELEVICH DEPARTMENT OF MATHEMATICS THE BUDGET AND TREASURY ACADEMY OF THE MINISTRY OF FINANCE OF THE RUSSIAN FEDERATION 7, ZLATOUSTINSKY MALIY PER.,
More informationHigher Order Averaging : periodic solutions, linear systems and an application
Higher Order Averaging : periodic solutions, linear systems and an application Hartono and A.H.P. van der Burgh Faculty of Information Technology and Systems, Department of Applied Mathematical Analysis,
More informationA Delay-dependent Condition for the Exponential Stability of Switched Linear Systems with Time-varying Delay
A Delay-dependent Condition for the Exponential Stability of Switched Linear Systems with Time-varying Delay Kreangkri Ratchagit Department of Mathematics Faculty of Science Maejo University Chiang Mai
More informationResearch Article Indefinite LQ Control for Discrete-Time Stochastic Systems via Semidefinite Programming
Mathematical Problems in Engineering Volume 2012, Article ID 674087, 14 pages doi:10.1155/2012/674087 Research Article Indefinite LQ Control for Discrete-Time Stochastic Systems via Semidefinite Programming
More informationMath Ordinary Differential Equations
Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x
More informationfor all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true
3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO
More informationSwitching, sparse and averaged control
Switching, sparse and averaged control Enrique Zuazua Ikerbasque & BCAM Basque Center for Applied Mathematics Bilbao - Basque Country- Spain zuazua@bcamath.org http://www.bcamath.org/zuazua/ WG-BCAM, February
More informationResearch Article Existence and Uniqueness Theorem for Stochastic Differential Equations with Self-Exciting Switching
Discrete Dynamics in Nature and Society Volume 211, Article ID 549651, 12 pages doi:1.1155/211/549651 Research Article Existence and Uniqueness Theorem for Stochastic Differential Equations with Self-Exciting
More informationAsymptotics for posterior hazards
Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and
More informationLinear ODEs. Existence of solutions to linear IVPs. Resolvent matrix. Autonomous linear systems
Linear ODEs p. 1 Linear ODEs Existence of solutions to linear IVPs Resolvent matrix Autonomous linear systems Linear ODEs Definition (Linear ODE) A linear ODE is a differential equation taking the form
More informationStochastic modelling of epidemic spread
Stochastic modelling of epidemic spread Julien Arino Department of Mathematics University of Manitoba Winnipeg Julien Arino@umanitoba.ca 19 May 2012 1 Introduction 2 Stochastic processes 3 The SIS model
More informationLinear Algebra Solutions 1
Math Camp 1 Do the following: Linear Algebra Solutions 1 1. Let A = and B = 3 8 5 A B = 3 5 9 A + B = 9 11 14 4 AB = 69 3 16 BA = 1 4 ( 1 3. Let v = and u = 5 uv = 13 u v = 13 v u = 13 Math Camp 1 ( 7
More informationBrownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion
Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of
More informationNoncooperative continuous-time Markov games
Morfismos, Vol. 9, No. 1, 2005, pp. 39 54 Noncooperative continuous-time Markov games Héctor Jasso-Fuentes Abstract This work concerns noncooperative continuous-time Markov games with Polish state and
More informationROBUST NULL CONTROLLABILITY FOR HEAT EQUATIONS WITH UNKNOWN SWITCHING CONTROL MODE
DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS Volume 34, Number 10, October 014 doi:10.3934/dcds.014.34.xx pp. X XX ROBUST NULL CONTROLLABILITY FOR HEAT EQUATIONS WITH UNKNOWN SWITCHING CONTROL MODE Qi Lü
More informationProperties of an infinite dimensional EDS system : the Muller s ratchet
Properties of an infinite dimensional EDS system : the Muller s ratchet LATP June 5, 2011 A ratchet source : wikipedia Plan 1 Introduction : The model of Haigh 2 3 Hypothesis (Biological) : The population
More informationSTABILIZATION OF LINEAR SYSTEMS VIA DELAYED STATE FEEDBACK CONTROLLER. El-Kébir Boukas. N. K. M Sirdi. Received December 2007; accepted February 2008
ICIC Express Letters ICIC International c 28 ISSN 1881-83X Volume 2, Number 1, March 28 pp. 1 6 STABILIZATION OF LINEAR SYSTEMS VIA DELAYED STATE FEEDBACK CONTROLLER El-Kébir Boukas Department of Mechanical
More informationASYMPTOTIC STABILITY IN THE DISTRIBUTION OF NONLINEAR STOCHASTIC SYSTEMS WITH SEMI-MARKOVIAN SWITCHING
ANZIAM J. 49(2007), 231 241 ASYMPTOTIC STABILITY IN THE DISTRIBUTION OF NONLINEAR STOCHASTIC SYSTEMS WITH SEMI-MARKOVIAN SWITCHING ZHENTING HOU 1, HAILING DONG 1 and PENG SHI 2 (Received March 17, 2007;
More informationStochastic process. X, a series of random variables indexed by t
Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,
More informationLinear Differential Equations. Problems
Chapter 1 Linear Differential Equations. Problems 1.1 Introduction 1.1.1 Show that the function ϕ : R R, given by the expression ϕ(t) = 2e 3t for all t R, is a solution of the Initial Value Problem x =
More informationRisk-Averse Control of Partially Observable Markov Systems. Andrzej Ruszczyński. Workshop on Dynamic Multivariate Programming Vienna, March 2018
Risk-Averse Control of Partially Observable Markov Systems Workshop on Dynamic Multivariate Programming Vienna, March 2018 Partially Observable Discrete-Time Models Markov Process: fx t ; Y t g td1;:::;t
More informationAn Introduction to Malliavin calculus and its applications
An Introduction to Malliavin calculus and its applications Lecture 3: Clark-Ocone formula David Nualart Department of Mathematics Kansas University University of Wyoming Summer School 214 David Nualart
More informationPositive Markov Jump Linear Systems (PMJLS) with applications
Positive Markov Jump Linear Systems (PMJLS) with applications P. Bolzern, P. Colaneri DEIB, Politecnico di Milano - Italy December 12, 2015 Summary Positive Markov Jump Linear Systems Mean stability Input-output
More informationSystemic Risk and Stochastic Games with Delay
Systemic Risk and Stochastic Games with Delay Jean-Pierre Fouque (with René Carmona, Mostafa Mousavi and Li-Hsien Sun) PDE and Probability Methods for Interactions Sophia Antipolis (France) - March 30-31,
More informationLocal null controllability of the N-dimensional Navier-Stokes system with N-1 scalar controls in an arbitrary control domain
Local null controllability of the N-dimensional Navier-Stokes system with N-1 scalar controls in an arbitrary control domain Nicolás Carreño Université Pierre et Marie Curie-Paris 6 UMR 7598 Laboratoire
More informationMEAN SQUARE SOLUTIONS OF SECOND-ORDER RANDOM DIFFERENTIAL EQUATIONS BY USING HOMOTOPY ANALYSIS METHOD
(c) Romanian RRP 65(No. Reports in 2) Physics, 350 362 Vol. 2013 65, No. 2, P. 350 362, 2013 MEAN SQUARE SOLUTIONS OF SECOND-ORDER RANDOM DIFFERENTIAL EQUATIONS BY USING HOMOTOPY ANALYSIS METHOD ALIREZA
More informationLECTURE 2: LOCAL TIME FOR BROWNIAN MOTION
LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in
More informationLecture: Adaptive Filtering
ECE 830 Spring 2013 Statistical Signal Processing instructors: K. Jamieson and R. Nowak Lecture: Adaptive Filtering Adaptive filters are commonly used for online filtering of signals. The goal is to estimate
More informationLINEAR SYSTEMS OF GENERALIZED ORDINARY DIFFERENTIAL EQUATIONS
Georgian Mathematical Journal Volume 8 21), Number 4, 645 664 ON THE ξ-exponentially ASYMPTOTIC STABILITY OF LINEAR SYSTEMS OF GENERALIZED ORDINARY DIFFERENTIAL EQUATIONS M. ASHORDIA AND N. KEKELIA Abstract.
More informationLecture 4: Numerical solution of ordinary differential equations
Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor
More informationQuitting games - An Example
Quitting games - An Example E. Solan 1 and N. Vieille 2 January 22, 2001 Abstract Quitting games are n-player sequential games in which, at any stage, each player has the choice between continuing and
More informationDeterministic Kalman Filtering on Semi-infinite Interval
Deterministic Kalman Filtering on Semi-infinite Interval L. Faybusovich and T. Mouktonglang Abstract We relate a deterministic Kalman filter on semi-infinite interval to linear-quadratic tracking control
More informationStability Theory for Nonnegative and Compartmental Dynamical Systems with Time Delay
1 Stability Theory for Nonnegative and Compartmental Dynamical Systems with Time Delay Wassim M. Haddad and VijaySekhar Chellaboina School of Aerospace Engineering, Georgia Institute of Technology, Atlanta,
More information