© 2009 Society for Industrial and Applied Mathematics


SIAM J. CONTROL OPTIM. Vol. 48, No. 2. © 2009 Society for Industrial and Applied Mathematics.

UNKNOWN INPUT AND STATE ESTIMATION FOR UNOBSERVABLE SYSTEMS

FRANCISCO J. BEJARANO, LEONID FRIDMAN, AND ALEXANDER POZNYAK

Abstract. The concept of strong detectability and its relation to the concept of invariant zeros is reviewed. For strongly detectable systems (a class that includes the strongly observable systems), a hierarchical design of a robust observer is proposed whose trajectories converge to those of the original state vector. Furthermore, it is shown that left invertibility is not a sufficient condition, nor is strong detectability a necessary condition, for estimating the unknown inputs. The necessary and sufficient condition for estimating the unknown inputs is that the set of invariant zeros that do not belong to the set of unobservable modes lies within the interior of the left half of the complex plane. This shows that the unknown inputs can be estimated even when it is impossible to estimate the entire state vector of the system. Two numerical examples illustrate the effectiveness of the proposed estimation schemes.

Key words. unknown input estimation, strong detectability, sliding mode observer

AMS subject classifications. 93C41, 93B07, 93B51

1. Introduction.

1.1. Antecedents. The problem of state observation for systems with unknown inputs has been extensively studied in the last two decades. Usually, the design of observers requires the system to have relative degree one with respect to the unknown inputs (see, e.g., [16] and [1]). Within variable structure theory, the problems of state observation and unknown input estimation have been actively developed using the sliding mode approach (see, for example, the corresponding chapters in the textbooks [11], [27] and the recent tutorials [3], [1], [22]).
But generally these observers were developed for systems satisfying the necessary and sufficient conditions for estimating the entire state vector without differentiation of the output (i.e., for systems with relative degree one w.r.t. the unknown inputs) [16]. It turns out that these conditions are not satisfied for the state observation of a mechanical system whose sensors measure only the positions of its elements [9]. To overcome the restriction of relative degree one w.r.t. the unknown inputs, the following idea was suggested: transform the system into a triangular form and use a step-by-step sliding mode observer based on the successive reconstruction of each element of the transformed state vector (see, e.g., [15], [27], [1], and [14]). However, the design of those observers is restricted to the fulfillment of a specific relative degree condition [12]. The essence of the observers that use the triangular form is to recover information from those derivatives of the system output which are not affected by the unknown inputs. Such derivatives can be estimated via a second-order sliding mode technique, specifically by the super-twisting algorithm. In the last two decades several second-order sliding-mode algorithms have been designed (see, e.g., [2], [4], [24], [5], and [21]). The super-twisting technique is a second-order sliding mode that keeps the advantages of the classical sliding mode; furthermore, the super-twisting algorithm can be used as a robust exact differentiator [18], [19]. It is used here for the state and unknown input estimation.

(Received by the editors August 16, 2007; accepted for publication (in revised form) December 3, 2008; published electronically March 18, 2009. Results of this manuscript were presented at the European Control Conference 2007, Kos, Greece. Department of Control, Division of Electrical Engineering, National Autonomous University of Mexico (UNAM), C.P. 04510, México, D.F. (javbejarano@yahoo.com.mx, lfridman@servidor.unam.mx). Departamento de Control Automático, CINVESTAV-IPN, A.P., C.P. 07000, México, D.F. (apoznyak@ctrl.cinvestav.mx).)

1.2. Motivation. It was shown in [7], [8] that the strong observability condition (absence of invariant zeros) is necessary and sufficient for the reconstruction of the state vector in finite time. Regarding the observation problem, in this paper we suggest a design scheme which relaxes the strong observability condition, even though the convergence of the observation error to zero then becomes asymptotic. On the other hand, the estimation of the unknown inputs usually requires first estimating the entire state vector (see, e.g., [25], [23], [13]); however, estimating the entire state vector, as we shall see below, requires the system to be at least strongly detectable (equivalently, that the set of invariant zeros belongs to the interior of the left half of the complex plane). Here, we show that in the general case the strong detectability condition can be relaxed for the unknown input estimation.

1.3. Main contributions. Regarding the unknown input reconstruction, the main contributions of this paper are:

- Necessary and sufficient structural conditions for the unknown input estimation have been found. Namely, the estimation of the unknown inputs can be carried out if the set of invariant zeros of the system (for the known control input equal to zero) that do not belong to the set of unobservable eigenvalues is within the interior of the left half of the complex plane.

- The structural conditions under which the unknown inputs can be reconstructed exactly in finite time are given.
- A scheme for the estimation (reconstruction) of the unknown inputs is suggested, based on the decomposition of the system into three subsystems. This allows one to estimate the states of the first two subsystems, which is enough for the unknown input estimation (reconstruction).

- Combining the structural conditions obtained in this paper with the conditions given in [16] for state estimation, it is shown that, under more restrictive conditions, the unknown inputs can be estimated without estimating the entire state vector and without using any derivative of the system output.

If the system is not strongly observable, it is impossible to design a standard differential observer providing state estimation. Because of that, we propose another approach, related to the design of an algebraic-type observer which works for both strongly observable systems and nonstrongly observable, but strongly detectable, systems. Therefore, concerning the state estimation, the main contributions are:

- Decomposition of the system into two subsystems. The first one is strongly observable for the null known control input, and the second one is expected to be detectable.

- Design of an observer for the state vector of the first subsystem by applying the second-order hierarchical observation scheme.

1.4. Structure of the paper. The manuscript is structured in the following manner. In section 2 we outline the problem statement. Section 3 is devoted to some preliminaries dealing mainly with the concepts of strong observability and strong detectability. In the same section, we present the main idea for estimating the state vector. Necessary and sufficient conditions under which the unknown inputs can be estimated are given in section 4. Section 5 deals with the design of the observer of the state vector. In section 6, an algorithm for the estimation of the unknown inputs is suggested. Some simulations are depicted in section 7, which illustrate the design scheme proposed in the paper. The proofs of propositions, lemmas, and theorems are given in the appendix.

1.5. Notation. We use the following notation. Let $G \in \mathbb{R}^{n \times m}$ be a matrix. We define $G^{+}$ as the pseudoinverse of $G$. Thus, if $\operatorname{rank} G = n$, then $G G^{+} = I$, and if $\operatorname{rank} G = m$, then $G^{+} G = I$. For $J \in \mathbb{R}^{n \times m}$ with $\operatorname{rank} J = r$, we define $J^{\perp} \in \mathbb{R}^{(n-r) \times n}$ with $\operatorname{rank} J^{\perp} = n - r$ as a matrix achieving $J^{\perp} J = 0$, and $J^{\perp\perp} \in \mathbb{R}^{r \times n}$ with $\operatorname{rank} J^{\perp\perp} = r$ as a matrix such that $J^{\perp\perp} (J^{\perp})^{T} = 0$. Notice that $\det \begin{bmatrix} J^{\perp} \\ J^{\perp\perp} \end{bmatrix} \neq 0$, and also that $J^{\perp\perp} J \in \mathbb{R}^{r \times m}$ with $\operatorname{rank}(J^{\perp\perp} J) = r$. Finally, $\mathbb{C}^{-} := \{ s \in \mathbb{C} : \operatorname{Re} s < 0 \}$.

2. Problem formulation. Let us consider the following system affected by unknown inputs:

(2.1)  $\dot{x}(t) = A x(t) + B u(t) + D w(t)$, $x(0) = x_0$,
       $y(t) = C x(t) + F w(t)$, $t \ge 0$.

The vector $x(t) \in \mathbb{R}^n$ is the state vector, $u(t) \in \mathbb{R}^m$ is the control, $y(t) \in \mathbb{R}^p$ is the output of the system, and $w(t) \in \mathbb{R}^q$ represents the unknown input vector, which is bounded, i.e., $\| w(t) \| \le w^{+} < \infty$. The matrices $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, $D \in \mathbb{R}^{n \times q}$, and $F \in \mathbb{R}^{p \times q}$ are known and constant. The pair $\{ u(t), y(t) \}$ is assumed to be measurable (available) at any time $t$. The current state $x(t)$ as well as the initial state $x_0$ are not available. Without loss of generality we assume that $\operatorname{rank} \begin{bmatrix} D \\ F \end{bmatrix} = q$.

Problem statement: In this paper we discuss the following problems for the system (2.1):
(a) the estimation of $x(t)$ based on the available information $\{ u(\tau), y(\tau) \}_{\tau \in [0,t]}$;
(b) the estimation of $w(t)$ based on the available information $\{ u(\tau), y(\tau) \}_{\tau \in [0,t]}$.

3. Preliminaries. Defining $\dot{x}_c = A x_c(t) + B u$, we have that the dynamic equation for $x_e := x - x_c$ is given by $\dot{x}_e(t) = A x_e(t) + D w(t)$ with the output $y_e := y - C x_c = C x_e(t) + F w(t)$.
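As a quick numeric sanity check of the notation introduced in subsection 1.5 (an illustrative sketch, not part of the paper; the matrices $G$ and $J$ below are arbitrary hypothetical examples), the pseudoinverse and annihilator identities can be verified with NumPy, building $J^{\perp}$ and $J^{\perp\perp}$ from a singular value decomposition:

```python
import numpy as np

# Hypothetical matrices chosen only to illustrate the notation of subsection 1.5.
G = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])            # G in R^{2x3}, rank G = n = 2 (full row rank)
assert np.allclose(G @ np.linalg.pinv(G), np.eye(2))   # rank G = n  =>  G G^+ = I

J = np.array([[1.0], [2.0], [3.0]])        # J in R^{3x1}, rank J = r = 1
U, s, Vt = np.linalg.svd(J, full_matrices=True)
Jperp = U[:, 1:].T                         # J^perp in R^{(n-r) x n}: rows orthogonal to range(J)
Jpp = U[:, :1].T                           # J^perpperp in R^{r x n}: rows spanning range(J)
assert np.allclose(Jperp @ J, 0.0)                     # J^perp J = 0
assert np.allclose(Jpp @ Jperp.T, 0.0)                 # J^perpperp (J^perp)^T = 0
assert abs(np.linalg.det(np.vstack([Jperp, Jpp]))) > 1e-9   # det [J^perp; J^perpperp] != 0
assert np.linalg.matrix_rank(Jpp @ J) == 1             # rank(J^perpperp J) = r
```

Since the left singular vectors form an orthogonal basis, taking the last $n - r$ of them gives one valid choice of $J^{\perp}$ (it is not unique, as the text's definition only fixes its row space).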
Thus, the estimation of $x$ is equivalent to the estimation of $x_e$, since $x = x_e + x_c$. This means that the control $u$ plays no role in the observation problem. Therefore, without loss of generality it will be assumed throughout this section and the next one that $u \equiv 0$. (An extension to the case of the nonlinear systems considered in [14] can be carried out.)

Let $\Sigma := (A, C, D, F)$ be the quadruple of matrices associated with the dynamic system governed by the equations

(3.1)  $\dot{x}(t) = A x(t) + D w(t)$, $x(0) = x_0$,
       $y(t) = C x(t) + F w(t)$, $t \ge 0$.

3.1. Strong observability. We recall some definitions corresponding to properties of $\Sigma$ and its associated dynamic system (3.1) (see, e.g., [16], [26]).

Definition 3.1. The system (3.1) is called strongly observable if, for every initial condition $x_0$ and every unknown input $w(t)$, the identity $y(t) = 0$ for all $t \ge 0$ implies that $x(t) = 0$ for all $t \ge 0$.

Definition 3.2. $\mathcal{V} \subseteq \mathbb{R}^n$ is a null-output $(A, D)$-invariant subspace if for every $x \in \mathcal{V}$ there exists a $w$ such that $(A x + D w) \in \mathcal{V}$ and $(C x + F w) = 0$. $\mathcal{V}^{*}_{\Sigma}$ is the maximal null-output $(A, D)$-invariant subspace; i.e., for every null-output $(A, D)$-invariant subspace $\mathcal{V}$ we have $\mathcal{V} \subseteq \mathcal{V}^{*}_{\Sigma}$. The subspace $\mathcal{V}^{*}_{\Sigma}$ is called the weakly unobservable subspace of $\Sigma$.

Definition 3.3. $s_0 \in \mathbb{C}$ is an invariant zero of $\Sigma$ if

(3.2)  $\operatorname{rank} P(s_0) < n + \operatorname{rank} \begin{bmatrix} D \\ F \end{bmatrix}$,  $P(s) := \begin{bmatrix} sI - A & -D \\ C & F \end{bmatrix}$.

Fact 1. The following statements are equivalent (see, e.g., [16], [26]):
(i) the dynamic system (3.1), associated with $\Sigma$, is strongly observable;
(ii) $\mathcal{V}^{*}_{\Sigma} = 0$;
(iii) $\Sigma$ has no invariant zeros.

3.2. Decomposition into the strongly and nonstrongly observable subsystems. Now we will decompose the system into a strongly observable part and a nonstrongly observable part. To this end we need a basis of $\mathcal{V}^{*}_{\Sigma}$; next, we give a way to construct one. Let the matrices $M_{k,\Sigma}$ be defined recursively by

(3.3)  $M_{k+1,\Sigma} = (\tilde{M}_{k+1,\Sigma})^{\perp\perp} \tilde{M}_{k+1,\Sigma}$,  $M_{1,\Sigma} = (F^{\perp} C)^{\perp\perp} F^{\perp} C$,
       $\tilde{M}_{k+1,\Sigma} = T^{\perp}_{k,\Sigma} \begin{bmatrix} M_{k,\Sigma} A \\ C \end{bmatrix}$,  $T_{k,\Sigma} = \begin{bmatrix} M_{k,\Sigma} D \\ F \end{bmatrix}$.

Thus, $M_{k+1,\Sigma}$ has full row rank.² In [2] it was proven that

(3.4)  $\mathcal{V}^{*}_{\Sigma} = \ker M_{n,\Sigma}$.

Defining³ $n_1 := \operatorname{rank} M_{n,\Sigma}$, we have that $M_{n,\Sigma} \in \mathbb{R}^{n_1 \times n}$. Now, with $V \in \mathbb{R}^{n \times (n - n_1)}$ being a matrix whose columns form a basis of $\mathcal{V}^{*}_{\Sigma}$, define the following nonsingular matrix:

(3.5)  $P := \begin{bmatrix} M_{n,\Sigma} \\ V^{+} \end{bmatrix}$,  where $V^{+} \in \mathbb{R}^{(n - n_1) \times n}$.

Hence,⁴ $P^{-1} = \begin{bmatrix} M^{+}_{n,\Sigma} & V \end{bmatrix}$, $M^{+}_{n,\Sigma} \in \mathbb{R}^{n \times n_1}$. On the other hand, Definition 3.2 is equivalent to the fulfillment of the following pair of algebraic equations:

(3.6)  $A V + D K = V Q$,  $C V + F K = 0$

² According to the notation given in 1.5, the matrix $M_{k+1,\Sigma}$ has full row rank and $\operatorname{rank}(M_{k+1,\Sigma}) = \operatorname{rank}(\tilde{M}_{k+1,\Sigma})$. In contrast with the definition of $M_{k+1}$ given in [7], here $M_{k+1,\Sigma}$ always has full row rank.
³ It is easy to verify that $\operatorname{rank} M_{j+1} = \operatorname{rank} M_j$ implies $\operatorname{rank} M_{j+2} = \operatorname{rank} M_j$. Therefore, to reduce the number of computations of the matrices $M_k$: if $\operatorname{rank} M_{j+1} = \operatorname{rank} M_j$, we can define $M_n = M_{n-1} = \cdots = M_j$.
⁴ Notice that $M^{+}_{n,\Sigma} = M^{T}_{n,\Sigma} (M_{n,\Sigma} M^{T}_{n,\Sigma})^{-1}$ and $V^{+} = (V^{T} V)^{-1} V^{T}$. Therefore, $M_{n,\Sigma} M^{+}_{n,\Sigma} = I$, $V^{+} V = I$, $M_{n,\Sigma} V = 0$, $V^{+} M^{+}_{n,\Sigma} = 0$.

for some matrices $\{K, Q\}$. It is clear that there exists a matrix $\bar{K} \in \mathbb{R}^{q \times n}$ such that

(3.7)  $\left\{ \begin{array}{l} A V + D K = V Q \\ C V + F K = 0 \end{array} \right\}$ is equivalent to $\left\{ \begin{array}{l} (A + D \bar{K}) V = V Q \\ (C + F \bar{K}) V = 0 \end{array} \right\}$.

Taking into account that $V^{+} V = I$, it is easy to see that $\bar{K} = K V^{+}$ satisfies (3.7). It should also be noticed that in general $\bar{K}$ is not unique. Let $\bar{x}$ be defined by $\bar{x} = P x$ with the partition $\bar{x}^{T} = \begin{bmatrix} \bar{x}^{T}_1 & \bar{x}^{T}_2 \end{bmatrix}$, where $\bar{x}_1 \in \mathbb{R}^{n_1}$ and $\bar{x}_2 \in \mathbb{R}^{n - n_1}$. Thus, because of the manner in which $P$ was defined, and from (3.7) and (3.4), the dynamics of $\bar{x}$ are governed by the equations

(3.8)  $\begin{bmatrix} \dot{\bar{x}}_1(t) \\ \dot{\bar{x}}_2(t) \end{bmatrix} = \begin{bmatrix} A_1 & 0 \\ A_2 & A_4 \end{bmatrix} \begin{bmatrix} \bar{x}_1(t) \\ \bar{x}_2(t) \end{bmatrix} + \begin{bmatrix} D_1 \\ D_2 \end{bmatrix} \bar{w}(t)$,
       $y(t) = C_1 \bar{x}_1(t) + F \bar{w}(t)$,
       $\bar{w}(t) = w(t) - \bar{K} P^{-1} \bar{x} = w(t) - K \bar{x}_2(t)$,

where

(3.9)  $\begin{bmatrix} A_1 & 0 \\ A_2 & A_4 \end{bmatrix} := P (A + D \bar{K}) P^{-1}$,  $\begin{bmatrix} D_1 \\ D_2 \end{bmatrix} := P D$,  $C_1 := (C + F \bar{K}) M^{+}_{n,\Sigma}$.

Now, define $\Sigma_{\bar{K},P} := ( P (A + D \bar{K}) P^{-1}, (C + F \bar{K}) P^{-1}, P D, F )$. From (3.4) and (3.7), it follows that

(3.10)  $\ker M_{n,\Sigma_{\bar{K},P}} = \mathcal{V}^{*}_{\Sigma_{\bar{K},P}} = P \mathcal{V}^{*}_{\Sigma} = P \ker M_{n,\Sigma}$.

Lemma 3.4. Defining $\bar{\Sigma} := (A_1, C_1, D_1, F)$, we have that the dynamic system associated with $\bar{\Sigma}$ is strongly observable; i.e., $\ker M_{n_1,\bar{\Sigma}} = 0$ and $\bar{\Sigma}$ has no invariant zeros.

3.3. Strong detectability. The next definition can be found in [16] and [26].

Definition 3.5. The system (3.1) is called strongly detectable if, for every initial condition $x_0$ and every unknown input $w(t)$ providing the existence of a solution of (3.1), the identity $y(t) = 0$ for all $t \ge 0$ implies $x(t) \to 0$ as $t \to \infty$.

Remark 1. It is clear that the strong detectability property is a necessary requirement for the asymptotic estimation of the state vector. As we will see later, strong detectability is also a sufficient condition for this purpose.

Remark 2. Evidently, the strong detectability condition is less restrictive than the strong observability condition. The following system is an example of a system that is not strongly observable but is strongly detectable:

$\dot{x}_1(t) = x_1(t) + w(t)$,
$\dot{x}_2(t) = x_1(t) - x_2(t)$,
$y(t) = x_1(t)$.

The following theorem relates strong detectability to the invariant zeros.

Theorem 3.6 (see [16]). The system (3.1) is strongly detectable if, and only if, the set of the invariant zeros of $\Sigma$ belongs to $\mathbb{C}^{-}$.
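The recursion (3.3), the characterization (3.4), and Definition 3.3 can be checked numerically on the example of Remark 2 (an illustrative sketch, not from the paper; the helper functions `row_basis` and `left_annihilator` are hypothetical implementations of the $(\cdot)^{\perp\perp}$ and $(\cdot)^{\perp}$ operations):

```python
import numpy as np

def row_basis(M, tol=1e-9):
    """A full-row-rank matrix with the same row space as M (the (.)^{perpperp}(.) step)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[: int(np.sum(s > tol))]

def left_annihilator(T, tol=1e-9):
    """A full-row-rank T^perp with T^perp T = 0."""
    U, s, Vt = np.linalg.svd(T, full_matrices=True)
    return U[:, int(np.sum(s > tol)):].T

# Example of Remark 2: x1' = x1 + w, x2' = x1 - x2, y = x1.
A = np.array([[1.0, 0.0], [1.0, -1.0]])
D = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.0]])
n, q = 2, 1

# Recursion (3.3): M_1 = (F^perp C)^{perpperp} F^perp C, then M_{k+1} from T_k^perp [M_k A; C].
M = row_basis(left_annihilator(F) @ C)
for _ in range(n - 1):
    T = np.vstack([M @ D, F])
    M = row_basis(left_annihilator(T) @ np.vstack([M @ A, C]))

# (3.4): V*_Sigma = ker M_n; here it is spanned by e2, so the system is not strongly observable.
assert M.shape[0] == 1 and np.allclose(M @ np.array([0.0, 1.0]), 0.0)

# Definition 3.3: s0 = -1 is an invariant zero (rank P(s0) < n + rank [D; F]),
# and it lies in C^-, consistent with strong detectability (Theorem 3.6).
def P_of(s):
    return np.block([[s * np.eye(n) - A, -D], [C, F]])

assert np.linalg.matrix_rank(P_of(-1.0)) < n + q
assert np.linalg.matrix_rank(P_of(2.0)) == n + q
```

The single invariant zero $s_0 = -1$ coincides with the decay rate of the unobserved state $x_2$ under $y \equiv 0$, which is exactly what Theorem 3.6 formalizes.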

Now, using the notation (3.9), we are ready to give a characterization of the invariant zeros of $\Sigma$.

Lemma 3.7. The invariant zeros of $\Sigma := (A, C, D, F)$ are characterized by the following properties:
(a) If $\operatorname{rank} \begin{bmatrix} D_1 \\ F \end{bmatrix} = q$, the set of the invariant zeros of $\Sigma$ and the set of eigenvalues of the matrix $A_4$ are identical.
(b) If $\operatorname{rank} \begin{bmatrix} D_1 \\ F \end{bmatrix} < q$, every $s \in \mathbb{C}$ is an invariant zero, where $q$ is the number of unknown inputs, that is, $w(t) \in \mathbb{R}^q$.

The following proposition can be found as an exercise on p. 17 of [26].

Proposition 3.8. The system $\Sigma = (A, C, D, F)$ is strongly detectable if, and only if, the pair $(A + D K, C + F K)$ is detectable for any $K \in \mathbb{R}^{q \times n}$.

3.4. Basic idea for the state estimation. In this part of the paper, we give the basic procedure for the reconstruction of the state in the new coordinates.

3.4.1. Recursive method for the reconstruction of $\bar{x}_1$. The following is a recursive method for expressing $\bar{x}_1$ as a function of $y$ and its derivatives. It consists of the successive construction of the vectors $M_{k,\bar{\Sigma}} \bar{x}_1(t)$, which leads to the construction of the vector $M_{n_1,\bar{\Sigma}} \bar{x}_1(t)$.

Construction of the vector $M_{n_1,\bar{\Sigma}} \bar{x}_1(t)$ ($\bar{\Sigma} := (A_1, C_1, D_1, F)$):

1. Defining $\xi^{1}(y) := (F^{\perp} C_1)^{\perp\perp} F^{\perp} y$, the following equality is obtained:
$\xi^{1}(y) = (F^{\perp} C_1)^{\perp\perp} F^{\perp} C_1 \bar{x}_1 = M_{1,\bar{\Sigma}} \bar{x}_1$;

2. defining $\xi^{2}(y, \dot{y}) := (\tilde{M}_{2,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{1,\bar{\Sigma}} \begin{bmatrix} \frac{d}{dt} \xi^{1}(y) \\ y \end{bmatrix}$, it is obtained that
$\xi^{2}(y, \dot{y}) = (\tilde{M}_{2,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{1,\bar{\Sigma}} \begin{bmatrix} M_{1,\bar{\Sigma}} A_1 \\ C_1 \end{bmatrix} \bar{x}_1 = M_{2,\bar{\Sigma}} \bar{x}_1$;

$k{+}1$. defining $\xi^{k+1}(y, \dot{y}, \ldots, y^{(k)}) := (\tilde{M}_{k+1,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{k,\bar{\Sigma}} \begin{bmatrix} \frac{d}{dt} \xi^{k} \\ y \end{bmatrix}$, it is obtained that
$\xi^{k+1}(y, \dot{y}, \ldots, y^{(k)}) = (\tilde{M}_{k+1,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{k,\bar{\Sigma}} \begin{bmatrix} M_{k,\bar{\Sigma}} A_1 \\ C_1 \end{bmatrix} \bar{x}_1 = M_{k+1,\bar{\Sigma}} \bar{x}_1$;

$n_1$. finally, defining $\xi^{n_1}(y, \dot{y}, \ldots, y^{(n_1 - 1)}) := (\tilde{M}_{n_1,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{n_1 - 1,\bar{\Sigma}} \begin{bmatrix} \frac{d}{dt} \xi^{n_1 - 1} \\ y \end{bmatrix}$, one gets
$\xi^{n_1}(y, \dot{y}, \ldots, y^{(n_1 - 1)}) = (\tilde{M}_{n_1,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{n_1 - 1,\bar{\Sigma}} \begin{bmatrix} M_{n_1 - 1,\bar{\Sigma}} A_1 \\ C_1 \end{bmatrix} \bar{x}_1 = M_{n_1,\bar{\Sigma}} \bar{x}_1$.

From Lemma 3.4, the equivalence between (ii) and (iii) in Fact 1, and (3.4), we have that $\det M_{n_1,\bar{\Sigma}} \neq 0$. Thus, after premultiplying by $M^{-1}_{n_1,\bar{\Sigma}}$ in the $n_1$th stage, the vector $\bar{x}_1(t)$ can be expressed by means of the following formula:

(3.11)  $\bar{x}_1(t) = M^{-1}_{n_1,\bar{\Sigma}} \, \xi^{n_1} ( y, \dot{y}, \ldots, y^{(n_1 - 1)} )$.

Equation (3.11) means that $\bar{x}_1$ can always be reconstructed by means of linear combinations of the terms of the vector $y$ and their derivatives.
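Formula (3.11) can be illustrated on a small hypothetical strongly observable system (not one of the paper's examples): $\dot{x}_1 = x_2$, $\dot{x}_2 = w$, $y = x_1$. Working through the recursion of (3.3) for this system gives $M_1 = [\,1\;\;0\,]$ and $M_2 = \left[\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right]$, so (3.11) reads $x = M_2^{-1} [\dot y;\, y] = [y;\, \dot y]$, i.e., the state is recovered from the output and its first derivative:

```python
import numpy as np

# Hypothetical strongly observable example: x1' = x2, x2' = w, y = x1.
# By (3.11), x(t) = [y(t); dy/dt(t)]; we check this on simulated data,
# replacing the exact derivative by a central difference.
dt, T = 1e-3, 5.0
N = int(T / dt)
x = np.array([1.0, 0.5])
ys, xs = [], []
for k in range(N):
    w = np.sin(k * dt)                 # unknown input (used only to generate the data)
    ys.append(x[0]); xs.append(x.copy())
    x = x + dt * np.array([x[1], w])   # forward Euler step

k = N // 2
dy = (ys[k + 1] - ys[k - 1]) / (2 * dt)   # numerical dy/dt at an interior sample
x_hat = np.array([ys[k], dy])
assert np.linalg.norm(x_hat - xs[k]) < 1e-3
```

In the paper the numerical differentiation is not done by finite differences but by the super-twisting differentiator of section 5, which is exact in finite time and robust to the unknown input.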

3.4.2. Procedure for the estimation of $\bar{x}_2$. As we have seen above, the strong detectability property is a necessary condition for the asymptotic estimation of the entire state vector. Therefore, it is assumed that the dynamic system associated with $\Sigma$ is strongly detectable. This implies, from Theorem 3.6 and Lemma 3.7(b), that $\operatorname{rank} \begin{bmatrix} D_1 \\ F \end{bmatrix} = q$ and hence $\begin{bmatrix} D_1 \\ F \end{bmatrix}^{+} \begin{bmatrix} D_1 \\ F \end{bmatrix} = I$. Thus, from (3.8), $\bar{w}$ can be rewritten as

$\bar{w} = \begin{bmatrix} D_1 \\ F \end{bmatrix}^{+} \begin{bmatrix} \dot{\bar{x}}_1 - A_1 \bar{x}_1 \\ y - C_1 \bar{x}_1 \end{bmatrix}$,

and its substitution into the $\dot{\bar{x}}_2$-equation gives

$\dot{\bar{x}}_2 = A_4 \bar{x}_2 + A_2 \bar{x}_1 + D_2 \begin{bmatrix} D_1 \\ F \end{bmatrix}^{+} \begin{bmatrix} \dot{\bar{x}}_1 - A_1 \bar{x}_1 \\ y - C_1 \bar{x}_1 \end{bmatrix}$.

Now, let $\hat{z}_2$ be the state observer for $\bar{x}_2$ defined by

$\hat{z}_2 = z_2 + D_2 \begin{bmatrix} D_1 \\ F \end{bmatrix}^{+} \begin{bmatrix} \bar{x}_1 \\ 0 \end{bmatrix}$,
$\dot{z}_2 = A_4 \hat{z}_2 + A_2 \bar{x}_1 - D_2 \begin{bmatrix} D_1 \\ F \end{bmatrix}^{+} \begin{bmatrix} A_1 \bar{x}_1 \\ C_1 \bar{x}_1 - y \end{bmatrix}$,

where $\bar{x}_1$ is supposed to be reconstructed from (3.11) using the recursive method given in 3.4.1. Thus, the error $\bar{x}_2 - \hat{z}_2$ is governed by the equation $\frac{d}{dt}(\bar{x}_2 - \hat{z}_2) = A_4 (\bar{x}_2 - \hat{z}_2)$. By the assumption that (3.1) is strongly detectable, from Theorem 3.6 and Lemma 3.7, we have $\hat{z}_2(t) \to \bar{x}_2(t)$ as $t \to \infty$.

The previous scheme, together with the definition of strong detectability, gives rise to the following result.

Remark 3. Using the output of the system and linear combinations of its derivatives, strong detectability turns out to be a necessary and sufficient condition for the asymptotic estimation of $x$.

4. Necessary and sufficient conditions for the estimation of $w(t)$. In this section we will show that for the estimation of $w$ the system may be nonstrongly detectable; even further, the pair $(A, C)$ may be nondetectable. We will show that the necessary and sufficient condition for estimating $w$ has to do with the set of invariant zeros of $\Sigma = (A, C, D, F)$ and the set of eigenvalues related to the unobservability of the pair $(A, C)$. For that purpose we decompose the dynamics of the vector $\bar{x}_2$ in (3.8) so that the second part of this decomposition corresponds to the unobservable part of $(A, C)$.
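Before continuing, the reduced-order observer of subsection 3.4.2 can be sketched numerically on the example of Remark 2 (an illustrative sketch with hypothetical initial values, not one of the paper's simulations). There $\bar{x}_1 = x_1$ is measured directly ($y = x_1$), $A_4 = -1$, and $D_2 = 0$, so the observer collapses to $\dot{z}_2 = -z_2 + y$ and the error obeys $\dot{e} = A_4 e = -e$:

```python
import numpy as np

# Reduced observer of 3.4.2 for the Remark 2 example: z2' = -z2 + y, error e' = -e.
dt, T = 1e-3, 12.0
x1, x2, z2 = 1.0, 3.0, 0.0
for k in range(int(T / dt)):
    w = np.cos(2 * k * dt)                               # bounded unknown input
    y = x1                                               # measured output
    x1, x2 = x1 + dt * (x1 + w), x2 + dt * (x1 - x2)     # plant (Euler step)
    z2 = z2 + dt * (y - z2)                              # observer uses only y

assert abs(x2 - z2) < 1e-3    # asymptotic convergence of the x2-estimate
```

Note that the estimate converges despite the unknown input $w$: the error dynamics $\dot{e} = A_4 e$ are entirely decoupled from $w$, which is the point of the construction.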
Since the proof of sufficiency of the theorem establishing the conditions under which the estimation of $w$ can be carried out is constructive, we give at the same time the main procedure for the estimation of $w$. Let $x^{w,x_0}$ be the solution of the differential equation $\dot{x}(t) = A x(t) + D w$, $x(0) := x_0$, and let $y^{w,x_0}(t) = C x^{w,x_0} + F w$. Thus, $\bar{x}^{w,x_0} = P x^{w,x_0}$ is governed by the set of equations (3.8). Now, let us recall the definition of left invertibility. The left invertibility concept in the time domain framework can be found in [6] for the case $F = 0$, and in [26] for the general class of inputs that are impulsive-smooth distributions. The definition given below is quite similar to the second one.

Definition 4.1. The system $\Sigma$ is called left invertible if for any $w_1(t)$, $w_2(t) \in \mathbb{R}^q$ the following statement holds: $y^{w_1,x_0}(t) = y^{w_2,x_0}(t)$ for all $t \ge 0$ implies $w_1(t) = w_2(t)$ for all $t \ge 0$.

It is clear that left invertibility is a necessary condition for the estimation of $w(t)$. However, we will see afterwards that it is not a sufficient one. That is quite obvious

because the fulfillment of the left invertibility property depends on the knowledge of $x_0$, which is not the case considered here.

Lemma 4.2. $\Sigma$ is left invertible if, and only if, $\operatorname{rank} \begin{bmatrix} D_1 \\ F \end{bmatrix} = q$.

The next corollary follows directly from Lemmas 3.7 and 4.2.

Corollary 4.3. $\Sigma$ is left invertible if, and only if, the set of invariant zeros of $\Sigma$ is finite.

Let $\mathcal{N}$ be the unobservable subspace corresponding to the pair $(A, C)$, that is, the greatest subspace satisfying

(4.1)  $A \mathcal{N} \subseteq \mathcal{N}$ and $C \mathcal{N} = 0$.

It is clear by the definition of $\mathcal{V}^{*}_{\Sigma}$ that $\mathcal{N} \subseteq \mathcal{V}^{*}_{\Sigma}$. Let $\mathcal{O}$ be the observability matrix of the pair $(A, C)$; it is well known that $\mathcal{N} = \ker \mathcal{O}$. Let $N$ be a full column rank matrix whose columns form a basis of $\mathcal{N}$. Thus, we can choose a full column rank matrix $V$ forming a basis of $\mathcal{V}^{*}_{\Sigma}$ adapted to $\mathcal{N}$; that is, $V$ must have the following form:

(4.2)  $V = \begin{bmatrix} \bar{V} & N \end{bmatrix}$.

Defining $n_2 := \dim \mathcal{N}$, we have that $\bar{V} \in \mathbb{R}^{n \times (n - (n_1 + n_2))}$ and $N \in \mathbb{R}^{n \times n_2}$.

Proposition 4.4. If $\operatorname{rank} \begin{bmatrix} D_1 \\ F \end{bmatrix} = q$ and $V$ has the form (4.2), the matrices $K$ and $Q$ satisfying (3.6) take the form

(4.3)  $K = \begin{bmatrix} \bar{K}_1 & 0 \end{bmatrix}$,  $Q = \begin{bmatrix} Q_1 & 0 \\ Q_2 & Q_4 \end{bmatrix}$

for some matrices $\bar{K}_1 \in \mathbb{R}^{q \times (n - n_1 - n_2)}$, $Q_1 \in \mathbb{R}^{(n - n_1 - n_2) \times (n - n_1 - n_2)}$, $Q_2 \in \mathbb{R}^{n_2 \times (n - n_1 - n_2)}$, and $Q_4 \in \mathbb{R}^{n_2 \times n_2}$.

Thus, under the assumption that $\operatorname{rank} \begin{bmatrix} D_1 \\ F \end{bmatrix} = q$, and taking into account (3.7), (4.2), and (4.3), we have that the matrix $A_4$ in (3.8) takes the following partitioned form:

(4.4)  $A_4 := V^{+} (A + D \bar{K}) V = \begin{bmatrix} Q_1 & 0 \\ Q_2 & Q_4 \end{bmatrix} =: \begin{bmatrix} A_{41} & 0 \\ A_{42} & A_{44} \end{bmatrix}$,

where $A_{41} := Q_1$, $A_{42} := Q_2$, and $A_{44} := Q_4$. Therefore, partitioning the vector $\bar{x}_2 =: \begin{bmatrix} \bar{x}_{21}(t) \\ \bar{x}_{22}(t) \end{bmatrix}$, and in view of (4.3) and (4.4), the system (3.8) can be rewritten as

(4.5)  $\begin{bmatrix} \dot{\bar{x}}_1(t) \\ \dot{\bar{x}}_{21}(t) \\ \dot{\bar{x}}_{22}(t) \end{bmatrix} = \begin{bmatrix} A_1 & 0 & 0 \\ A_{21} & A_{41} & 0 \\ A_{22} & A_{42} & A_{44} \end{bmatrix} \begin{bmatrix} \bar{x}_1(t) \\ \bar{x}_{21}(t) \\ \bar{x}_{22}(t) \end{bmatrix} + \begin{bmatrix} D_1 \\ D_{21} \\ D_{22} \end{bmatrix} \bar{w}(t)$,
       $y(t) = C_1 \bar{x}_1(t) + F \bar{w}(t)$,
       $\bar{w}(t) = w(t) - \bar{K}_1 \bar{x}_{21}(t)$,

where $\bar{x}_{21} \in \mathbb{R}^{n - (n_1 + n_2)}$ and $\bar{x}_{22} \in \mathbb{R}^{n_2}$. The matrices $A_2$ and $D_2$ given in (3.8) were partitioned as follows:

$\begin{bmatrix} A_{21} \\ A_{22} \end{bmatrix} := A_2$,  $\begin{bmatrix} D_{21} \\ D_{22} \end{bmatrix} := D_2$.

First, we will show some facts that will be important in the procedure for finding the conditions under which we can estimate $w$.

Definition 4.5 (see [26]). The constant $\lambda \in \mathbb{C}$ is said to be an $(A, C)$-unobservable eigenvalue if

$\operatorname{rank} \begin{bmatrix} \lambda I - A \\ C \end{bmatrix} < n$.

Lemma 4.6. If $\operatorname{rank} \begin{bmatrix} D_1 \\ F \end{bmatrix} = q$, then:
(a) the set of $(A, C)$-unobservable eigenvalues is identical to the set of eigenvalues of $A_{44}$, and
(b) the set of invariant zeros of $\Sigma$ that do not belong to the set of $(A, C)$-unobservable eigenvalues is identical to the set of eigenvalues of $A_{41}$.

Theorem 4.7. The following claims are equivalent.
(i) For any initial condition $x(0)$,
(4.6)  $y(t) = 0$ for all $t \ge 0$ implies $w(t) = 0$ for all $t \ge 0$.
(ii) The set of invariant zeros of $\Sigma$ is identical to the set of $(A, C)$-unobservable eigenvalues.
(iii) $\Sigma$ is left invertible and $\mathcal{V}^{*}_{\Sigma} \subseteq \mathcal{N}$.
(iv)⁵ $\operatorname{rank} \begin{bmatrix} D_1 \\ F \end{bmatrix} = q$ and $\operatorname{rank} M_{n,\Sigma} = \operatorname{rank} \mathcal{O}$.

Theorem 4.8. The following statements are equivalent.
(i) For any initial condition $x(0)$,
(4.7)  $y(t) = 0$ for all $t \ge 0$ implies $w(t) \to 0$ as $t \to \infty$.
(ii) The set of invariant zeros of $\Sigma$ that do not belong to the set of $(A, C)$-unobservable eigenvalues is in $\mathbb{C}^{-}$.
(iii) $\operatorname{rank} \begin{bmatrix} D_1 \\ F \end{bmatrix} = q$ and the set of eigenvalues of $A_{41}$ is in $\mathbb{C}^{-}$.

The following theorems establish the conditions, in terms of the invariant zeros of $\Sigma := (A, C, D, F)$ and the $(A, C)$-unobservable eigenvalues, under which the estimation of $w(t)$ can be carried out.

Theorem 4.9. Based on the measurement of $y(t)$, the vector $w$ can be estimated if, and only if, the set of invariant zeros of $\Sigma$ that do not belong to the set of $(A, C)$-unobservable eigenvalues is in $\mathbb{C}^{-}$.

Theorem 4.10. Based on the measurement of $y(t)$, the vector $w$ can be reconstructed in finite time if, and only if, the set of invariant zeros of $\Sigma$ is identical to the set of $(A, C)$-unobservable eigenvalues.

We should notice that if, in addition to the condition of Theorem 4.9, the system $\Sigma$ satisfies the condition $\operatorname{rank} \begin{bmatrix} CD & F \\ F & 0 \end{bmatrix} = \operatorname{rank} F + q$, then one can avoid using derivatives for the estimation of $w$, because in that case $\bar{x}_1$ can be estimated asymptotically by using a linear observer (see, e.g., [16]). This can be summarized in the following theorem.

Theorem 4.11. The vector $w$ can be estimated from the system output $y$, without using any derivatives, if, and only if, the following two conditions are fulfilled: (1) the set of invariant zeros of $\Sigma$ that do not belong to the set of $(A, C)$-unobservable eigenvalues is in $\mathbb{C}^{-}$, and (2) $\operatorname{rank} \begin{bmatrix} CD & F \\ F & 0 \end{bmatrix} = \operatorname{rank}(F) + q$.

5. Design of a robust observer. The following restriction will be assumed to be satisfied throughout this section.

A1. The dynamic system associated with $\Sigma = (A, C, D, F)$ is strongly detectable.

Now, we will apply the design scheme proposed in section 3 to the system (2.1) for the estimation of the state vector.

⁵ $M_{n,\Sigma}$ is given by (3.3) and $\mathcal{O}$ is the observability matrix of $(A, C)$.
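The conditions of Theorems 4.9 and 4.10 can be checked numerically for the example of Remark 2 (a sketch, not part of the paper's development). For that system the only invariant zero, $s_0 = -1$, is also the only $(A, C)$-unobservable eigenvalue, so by Theorem 4.10 the unknown input is reconstructible in finite time even though $x_2$ can only be estimated asymptotically:

```python
import numpy as np

# Example of Remark 2: x1' = x1 + w, x2' = x1 - x2, y = x1.
A = np.array([[1.0, 0.0], [1.0, -1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0], [0.0]])
F = np.array([[0.0]])
n = 2

def unobservable(lam):
    # Definition 4.5 (PBH-type test): lam is (A, C)-unobservable iff rank [lam I - A; C] < n.
    return np.linalg.matrix_rank(np.vstack([lam * np.eye(n) - A, C])) < n

def invariant_zero(s):
    # Definition 3.3: rank P(s) < n + rank [D; F].
    P = np.block([[s * np.eye(n) - A, -D], [C, F]])
    return np.linalg.matrix_rank(P) < n + np.linalg.matrix_rank(np.vstack([D, F]))

# A has eigenvalues {1, -1}; only -1 is unobservable, and it is also the invariant zero.
assert invariant_zero(-1.0) and unobservable(-1.0)
assert not invariant_zero(1.0) and not unobservable(1.0)
```

Since the two sets coincide, condition (ii) of Theorem 4.7 holds, which is the finite-time reconstruction case of Theorem 4.10.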

Thus, with $P$ selected according to (3.5), after defining $\bar{x} := P x$, and with the partition $\bar{x} =: \begin{bmatrix} \bar{x}_1 \\ \bar{x}_2 \end{bmatrix}$, we have

(5.1)  $\begin{bmatrix} \dot{\bar{x}}_1(t) \\ \dot{\bar{x}}_2(t) \end{bmatrix} = \begin{bmatrix} A_1 & 0 \\ A_2 & A_4 \end{bmatrix} \begin{bmatrix} \bar{x}_1(t) \\ \bar{x}_2(t) \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} u + \begin{bmatrix} D_1 \\ D_2 \end{bmatrix} \bar{w}(t)$,
       $y(t) = C_1 \bar{x}_1(t) + F \bar{w}(t)$,
       $\bar{w}(t) = w(t) - K \bar{x}_2(t)$,

where the system and distribution matrices are defined according to (3.9), and $B_1 = M_{n,\Sigma} B$ and $B_2 = V^{+} B$.

5.1. Bounding term. In the recursive method given in 3.4.1, some time derivatives are needed; here, we suggest using the super-twisting algorithm to obtain the required derivatives. However, the super-twisting algorithm requires a bound on the state vector that is to be reconstructed; therefore, to ensure the required bound we will use the following Luenberger observer:

(5.2)  $\dot{z} = P A P^{-1} z + P B u + L ( y - C P^{-1} z )$.

The matrix $P A P^{-1} - L C P^{-1}$ must be Hurwitz. Such a requirement can always be satisfied; A1 and Proposition 3.8 guarantee its feasibility. Defining $\bar{e} = \bar{x} - z$, we get the inequality

$\| \bar{e} \| \le \gamma \exp(-\lambda t) \| \bar{e}(0) \| + \mu w^{+}$

for some positive constants $\gamma$, $\lambda$, and $\mu$. Now, let us partition $\bar{e}$ into two vectors, i.e., $\bar{e}_1 = \bar{x}_1 - z_1$ and $\bar{e}_2 = \bar{x}_2 - z_2$, where $z^{T} =: \begin{bmatrix} z^{T}_1 & z^{T}_2 \end{bmatrix}$ and $z_1 \in \mathbb{R}^{n_1}$, $z_2 \in \mathbb{R}^{n - n_1}$. Let $\zeta$ be a constant satisfying $\zeta > \mu w^{+}$; then, after a finite time $T$, $\bar{e}_1$ and $\bar{e}_2$ stay bounded, i.e., $\| \bar{e}_1(t) \| < \zeta$ and $\| \bar{e}_2(t) \| < \zeta$ for all $t \ge T$.

5.2. Reconstruction of $M_{n_1,\bar{\Sigma}} \bar{e}_1(t)$. Now, the state estimation procedure of 3.4.1 will be applied to $\bar{e}_1$. Thus, once $\bar{e}_1$ is reconstructed, $\bar{x}_1$ can be recovered by the formula $\bar{x}_1 = \bar{e}_1 + z_1$. Firstly, recall that $\bar{\Sigma} := (A_1, C_1, D_1, F)$. Now, let us design the auxiliary vector $\sigma$ defined by the equation

(5.3)  $\dot{\sigma}(t) = A_1 z_1(t) + B_1 u$.

Define the first sliding variable $s^1$ as follows:

$s^1(t) = (\tilde{M}_{2,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{1,\bar{\Sigma}} \begin{bmatrix} (F^{\perp} C_1)^{\perp\perp} F^{\perp} ( y(t) - C_1 \sigma(t) ) \\ \int_0^t ( y(\tau) - C_1 z_1(\tau) ) \, d\tau \end{bmatrix} - \int_0^t v^1(\tau) \, d\tau$.

Thus, taking the time derivative of $s^1$, and because of (3.3), we have

(5.4)  $\dot{s}^1(t) = M_{2,\bar{\Sigma}} \bar{e}_1(t) - v^1(t)$.

We design the output injection vector $v^1$ using the super-twisting technique ([17], [18]), involving not only a sign function but also its integral, that is,

(5.5)  $v^1_i = \bar{v}^1_i + \lambda_1 | s^1_i |^{1/2} \operatorname{sign} s^1_i$,
       $\dot{\bar{v}}^1_i = \alpha_1 \operatorname{sign} s^1_i$,

where $v^1_i$ is the $i$th entry of the vector $v^1$, and the same applies to $\bar{v}^1_i$ and $s^1_i$. The constants $\alpha_1$ and $\lambda_1$ are selected to satisfy the inequalities

$\kappa_1 \ge \| M_{2,\bar{\Sigma}} \| ( \| P A P^{-1} - L C P^{-1} \| \zeta + \| P D - L F \| w^{+} )$,
$\alpha_1 > \kappa_1$,  $\lambda_1 > \frac{(1 + \theta)(\kappa_1 + \alpha_1)}{1 - \theta} \sqrt{\frac{2}{\alpha_1 - \kappa_1}}$,  $0 < \theta < 1$,

where $\zeta$ was defined in subsection 5.1. Thus, according to [18], we have a second-order sliding mode, that is, $s^1(t) = \dot{s}^1(t) = 0$ for all $t \ge t_1$, where $t_1$ is the reaching time of the sliding mode. Therefore, from (5.4) and (5.5), we have that

(5.6)  $v^1(t) = M_{2,\bar{\Sigma}} \bar{e}_1(t)$ for all $t \ge t_1$.

We can follow a quite similar scheme for the reconstruction of $M_{3,\bar{\Sigma}} \bar{e}_1(t)$. Namely, design the variable $s^2(t)$ as

$s^2(t) = (\tilde{M}_{3,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{2,\bar{\Sigma}} \begin{bmatrix} v^1(t) - M_{2,\bar{\Sigma}} ( \sigma(t) - z_1(t) ) \\ \int_0^t ( y(\tau) - C_1 z_1(\tau) ) \, d\tau \end{bmatrix} - \int_0^t v^2(\tau) \, d\tau$.

Hence, taking into account (3.3) and (5.6), for $t \ge t_1$ the derivative of $s^2(t)$ is

(5.7)  $\dot{s}^2(t) = M_{3,\bar{\Sigma}} \bar{e}_1(t) - v^2(t)$.

Again, the output injection vector $v^2$ is designed using the super-twisting algorithm:

(5.8)  $v^2_i = \bar{v}^2_i + \lambda_2 | s^2_i |^{1/2} \operatorname{sign} s^2_i$,
       $\dot{\bar{v}}^2_i = \alpha_2 \operatorname{sign} s^2_i$.

The positive constants $\alpha_2$ and $\lambda_2$ should satisfy the following bounds:

$\kappa_2 \ge \| M_{3,\bar{\Sigma}} \| ( \| P A P^{-1} - L C P^{-1} \| \zeta + \| P D - L F \| w^{+} )$,
$\alpha_2 > \kappa_2$,  $\lambda_2 > \frac{(1 + \theta)(\kappa_2 + \alpha_2)}{1 - \theta} \sqrt{\frac{2}{\alpha_2 - \kappa_2}}$,  $0 < \theta < 1$.

Then, according to [18], we have that $s^2(t) = \dot{s}^2(t) = 0$ for all $t$ after $t_2$, which is the reaching time of the second sliding mode. Hence, in view of (5.7) and (5.8), we achieve the equality $v^2(t) = M_{3,\bar{\Sigma}} \bar{e}_1(t)$ for all $t \ge t_2$.

We can generalize the previous procedure for the reconstruction of $M_{k,\bar{\Sigma}} \bar{e}_1(t)$ ($k = 2, \ldots, n_1 - 1$). Since the main goal of this procedure is the reconstruction of $\bar{e}_1(t)$, in the last step we will reconstruct $\bar{e}_1(t)$ directly instead of recovering $M_{n_1,\bar{\Sigma}} \bar{e}_1(t)$. The procedure is detailed below.

(a) Sliding variable $s^1$:

(5.9)  $s^1(t) = (\tilde{M}_{2,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{1,\bar{\Sigma}} \begin{bmatrix} (F^{\perp} C_1)^{\perp\perp} F^{\perp} ( y(t) - C_1 \sigma(t) ) \\ \int_0^t ( y(\tau) - C_1 z_1(\tau) ) \, d\tau \end{bmatrix} - \int_0^t v^1(\tau) \, d\tau$;

sliding variable $s^k$, $k = 2, \ldots, n_1 - 2$:

(5.10)  $s^k(t) = (\tilde{M}_{k+1,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{k,\bar{\Sigma}} \begin{bmatrix} v^{k-1}(t) - M_{k,\bar{\Sigma}} ( \sigma(t) - z_1(t) ) \\ \int_0^t ( y(\tau) - C_1 z_1(\tau) ) \, d\tau \end{bmatrix} - \int_0^t v^k(\tau) \, d\tau$;

sliding variable $s^{n_1 - 1}$:

(5.11)  $s^{n_1 - 1}(t) = M^{-1}_{n_1,\bar{\Sigma}} (\tilde{M}_{n_1,\bar{\Sigma}})^{\perp\perp} T^{\perp}_{n_1 - 1,\bar{\Sigma}} \begin{bmatrix} v^{n_1 - 2}(t) - M_{n_1 - 1,\bar{\Sigma}} ( \sigma(t) - z_1(t) ) \\ \int_0^t ( y(\tau) - C_1 z_1(\tau) ) \, d\tau \end{bmatrix} - \int_0^t v^{n_1 - 1}(\tau) \, d\tau$,

where $\sigma(t)$ is defined by (5.3).

(b) Output injection vector $v^k$ ($k = 1, \ldots, n_1 - 1$):

(5.12)  $v^k_i = \bar{v}^k_i + \lambda_k | s^k_i |^{1/2} \operatorname{sign} s^k_i$,
        $\dot{\bar{v}}^k_i = \alpha_k \operatorname{sign} s^k_i$,

where $v^k_i$ is the $i$th entry of the vector $v^k$ and $\bar{v}^k_i$ is the $i$th entry of the vector $\bar{v}^k$. The constants $\alpha_k$ and $\lambda_k$ are designed according to [17] and [18]:

$\kappa_k \ge \| M_{k+1,\bar{\Sigma}} \| ( \| P A P^{-1} - L C P^{-1} \| \zeta + \| P D - L F \| w^{+} )$,  $k = 1, \ldots, n_1 - 2$,
$\kappa_k \ge \| P A P^{-1} - L C P^{-1} \| \zeta + \| P D - L F \| w^{+}$,  $k = n_1 - 1$,
$\alpha_k > \kappa_k$,  $\lambda_k > \frac{(1 + \theta)(\kappa_k + \alpha_k)}{1 - \theta} \sqrt{\frac{2}{\alpha_k - \kappa_k}}$,  $0 < \theta < 1$,  $k = 1, \ldots, n_1 - 1$,

where $\zeta$ was defined in 5.1 and $w^{+}$ is the bound on the unknown input. The result of the procedure for the reconstruction of $\bar{e}_1(t)$ is given in the following theorem.

Theorem 5.1 (see [7]). Following the design of $s^k$ and $v^k$ as in (5.9)-(5.12), we obtain the equalities

(5.13)  $v^k(t) = M_{k+1,\bar{\Sigma}} \bar{e}_1(t)$ for all $t \ge t_k$, $k = 1, \ldots, n_1 - 2$,
(5.14)  $v^{n_1 - 1}(t) = \bar{e}_1(t)$ for all $t \ge t_{n_1 - 1}$,

where $t_k$ is the reaching time of the $k$th sliding mode.

5.3. Observation of $\bar{x}_1$. Based on the recursive method given in 3.4.1, we have found the difference between the state vector and the Luenberger observer. This means that, following the design method given previously in this section, we have

(5.15)  $\bar{x}_1(t) = z_1(t) + v^{n_1 - 1}(t)$ for all $t \ge t_{n_1 - 1}$.

The equality (5.15) motivates us to propose the reconstruction of the state $\bar{x}_1(t)$ by means of

(5.16)  $\hat{z}_1(t) := z_1(t) + v^{n_1 - 1}(t)$.

Theorem 5.2. Designing $\hat{z}_1(t)$ according to (5.16), we achieve the identity

(5.17)  $\hat{z}_1(t) = \bar{x}_1(t)$ for all $t \ge t_{n_1 - 1}$.

Proof. It follows immediately by comparing (5.15) and (5.16).

5.4. Observation of $\bar{x}_2$. Now, let us design an observer for the vector $\bar{x}_2$ given by (5.1). This is done by means of $\hat{z}_2$, which is designed as

(5.18a)  $\hat{z}_2 = z_2 + D_2 \begin{bmatrix} D_1 \\ F \end{bmatrix}^{+} \begin{bmatrix} \hat{z}_1 \\ 0 \end{bmatrix}$,

(5.18b)  $\dot{z}_2 = A_4 \hat{z}_2 + A_2 \hat{z}_1 + B_2 u - D_2 \begin{bmatrix} D_1 \\ F \end{bmatrix}^{+} \begin{bmatrix} A_1 \hat{z}_1 + B_1 u \\ C_1 \hat{z}_1 - y \end{bmatrix}$.

Thus, taking into account (5.17), we can obtain the dynamic equation for the error $\bar{x}_2 - \hat{z}_2$, i.e.,

$\frac{d}{dt} ( \bar{x}_2(t) - \hat{z}_2(t) ) = A_4 ( \bar{x}_2(t) - \hat{z}_2(t) )$ for all $t \ge t_{n_1 - 1}$.

13 UNKNOWN INPUT AND STATE ESTIMATION 1167 Due to the Assumption A1, Theorem 3.6, and Lemma 3.7, the matrix A 4 is Hurwitz; therefore, the asymptotic stability of x 2 ẑ 2 is ensured, which implies (5.19) ẑ 2 (t) x 2 (t). t 5.5. Observer for the original system. Thus, defining ẑ T = ẑ1 T ẑ2 T,and from (5.16) and (5.19), we conclude that (5.2) ẑ (t) t x (t). Due to the coordinates change x = Pxthat we have used previously (P was defined in (3.5)), we have that the observer ˆx for the original state vector has to be designed as (5.21) ˆx (t) =P 1 ẑ (t) =P 1 ẑ1 (t) ẑ 2 (t) with ẑ 1 and ẑ 2 defined from (5.16) and (5.18), respectively. Theorem 5.3. The observer ˆx given by (5.21) converges to the original state vector x. That is, ˆx x (t). t Proof. It is clear from (5.2) and (5.21). 6. Identification of unknown inputs w(t) (General case). Consider again the system (2.1). Here, we apply the results obtained in section 4 for the estimation of the unknown inputs in the general case. That is, the proposed algorithm is not required to estimate the entire state vector since it is based on the necessary and sufficient conditions obtained at the end of section 4. Using the transformation P defined according to (3.5), but with V selected according to (4.2), we have that the dynamic equations for the transformed system x = Px takes the form (6.1) x 1 (t) x 21 (t) x 22 (t) = A 1 A 21 A 41 x 1 (t) x 21 (t) + B 1 B 21 u + D 1 D 21 w (t) A 22 A 42 A 44 x 22 (t) B 22 D 22 y (t) =C 1 x 1 (t)+f w (t) w (t) =w (t) K 1 x 21 (t), where x 1 R n1, x 21 R n n1 n2,and x 22? R n2. The partitions of the system and distribution matrices comes from (3.9) and (4.4). The matrices not defined yet are B 1 := M n,σ B, B 21 B 22 := V + B, D 21 D 22 := V + D. 
Since in section 4 we found the conditions under which $w(\cdot)$ can be estimated, throughout this section we will assume the following:⁶

B1. The set of the invariant zeros of $\Sigma = (A, C, D, F)$ that do not belong to the set of the $(A, C)$-unobservable eigenvalues is contained in $\mathbb C^-$.

Moreover, the use of the super-twisting algorithm as a differentiator imposes further restrictions related to the smoothness and boundedness of $w(t)$:

B2. There is a known constant $\alpha_w$ such that $\|\dot w(t)\| \le \alpha_w$.

⁶It should be noticed that B1 is a structural assumption, whereas B2 is an assumption required by the algorithm used (super-twisting) to estimate the needed derivatives.

Step 1.a. Estimation of $\bar x_1$.

As was established in section 4, for the estimation of $w(t)$ it is enough to estimate the states $\bar x_1$ and $\bar x_{21}$ (even in the case when $\bar x_{22}$ cannot be estimated). Therefore, we can estimate the reduced vector $[\bar x_1^T \;\; \bar x_{21}^T]^T$ following the same procedure given in the previous section for estimating $\bar x$. In other words, to estimate $\bar x_1$ and $\bar x_{21}$, we should follow the procedure of the previous section, but using the reduced vector $[\bar x_1^T \;\; \bar x_{21}^T]^T$ instead of the whole vector $\bar x = [\bar x_1^T \;\; \bar x_2^T]^T$. Thus, $z$ in subsection 5.1 becomes $z := [z_1^T \;\; z_{21}^T]^T$ ($z_1 \in \mathbb R^{n_1}$, $z_{21} \in \mathbb R^{n - n_1 - n_2}$), and its dynamics is governed by the equations

(6.2)  $\dot z = \bar A z + \bar B u + L (y - \bar C z),$

where

$\bar A = \begin{bmatrix} A_1 & -D_1 \bar K_1 \\ A_{21} & A_{41} - D_{21} \bar K_1 \end{bmatrix}, \quad \bar B = \begin{bmatrix} B_1 \\ B_{21} \end{bmatrix}, \quad \bar D = \begin{bmatrix} D_1 \\ D_{21} \end{bmatrix}, \quad \bar C = \begin{bmatrix} C_1 & -F \bar K_1 \end{bmatrix}.$

Notice that

$H(s) := \begin{bmatrix} sI - A_1 & D_1 \bar K_1 \\ -A_{21} & sI - (A_{41} - D_{21} \bar K_1) \\ C_1 & -F \bar K_1 \end{bmatrix} = \begin{bmatrix} sI - A_1 & 0 & D_1 \\ -A_{21} & sI - A_{41} & D_{21} \\ C_1 & 0 & -F \end{bmatrix} \begin{bmatrix} I & 0 \\ 0 & I \\ 0 & \bar K_1 \end{bmatrix}.$

Since $\bar\Sigma$ has no invariant zeros (Lemma 3.4) and $\operatorname{rank} [D_1^T \;\; F^T]^T = q$ (B1 and Theorem 4.8), the matrix $H(s)$ loses rank only when $s$ is an eigenvalue of $A_{41}$. Thus, by assumption B1 and Theorem 4.8, $A_{41}$ is Hurwitz and, consequently, the pair $(\bar A, \bar C)$ is detectable. Then, selecting the matrix $L \in \mathbb R^{(n - n_2) \times p}$ in such a way that $\bar A - L \bar C$ is Hurwitz, we have, for $\bar e_1 := \bar x_1 - z_1$, the inequality

$\|\bar e_1\| \le \gamma \exp(-\lambda t)\, \|\bar e_1(0)\| + \mu w^+$

for some positive constants $\gamma$, $\lambda$, $\mu$. Therefore, for $\zeta$ satisfying $\zeta > \mu w^+$, we obtain the inequality $\|\bar e_1(t)\| < \zeta$. Thus, we estimate $\bar x_1$ by means of $\hat z_1$ given by (5.16), designed following the same procedure used in (5.9)–(5.12), but with $z_1$ taken from (6.2).

Step 1.b. Estimation of $\bar x_{21}$.

The vector $\bar x_{21}$ is estimated by means of $\hat z_{21}$, which has to be designed in the following form:

$\hat z_{21} = z_{21} + D_{21} \begin{bmatrix} D_1 \\ F \end{bmatrix}^{+} \begin{bmatrix} \hat z_1 \\ 0 \end{bmatrix},$

$\dot z_{21} = A_{41} \hat z_{21} + A_{21} \hat z_1 + B_{21} u - D_{21} \begin{bmatrix} D_1 \\ F \end{bmatrix}^{+} \begin{bmatrix} A_1 \hat z_1 + B_1 u \\ C_1 \hat z_1 - y \end{bmatrix}.$

Thus, from (6.1), the dynamic equation for the difference $\bar x_{21} - \hat z_{21}$ is

$\frac{d}{dt}\bigl(\bar x_{21}(t) - \hat z_{21}(t)\bigr) = A_{41} \bigl(\bar x_{21}(t) - \hat z_{21}(t)\bigr),$

and Assumption B1 together with Theorem 4.8 implies that $A_{41}$ is Hurwitz.
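Detectability of a pair such as $(\bar A, \bar C)$ can be checked numerically with the PBH test; `is_detectable` below is a hypothetical helper, and the pair used is illustrative, not the one constructed in the text.

```python
import numpy as np

def is_detectable(A, C, tol=1e-9):
    """PBH test: (A, C) is detectable iff [sI - A; C] has full column rank
    for every eigenvalue s of A with nonnegative real part."""
    n = A.shape[0]
    for s in np.linalg.eigvals(A):
        if s.real >= -tol:
            M = np.vstack([s * np.eye(n) - A, C])
            if np.linalg.matrix_rank(M, tol=1e-7) < n:
                return False
    return True

# Illustrative pair: the third state is unobservable but carries a stable
# eigenvalue (-3), so the pair is detectable although not observable.
A = np.array([[0.0, 1.0,  0.0],
              [0.0, 0.0,  0.0],
              [0.0, 0.0, -3.0]])
C = np.array([[1.0, 0.0, 0.0]])
print(is_detectable(A, C))
```

Once detectability is confirmed, a gain $L$ making $\bar A - L\bar C$ Hurwitz can be obtained by standard pole-placement or Riccati-based routines.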
Therefore,

(6.3)  $\hat z_{21}(t) \to \bar x_{21}(t)$ as $t \to \infty$.

Step 2. Estimation of $w(t)$.

Let us define $r := \operatorname{rank} F$. If $r < q$, define $G \in \mathbb R^{q \times q}$ as a nonsingular matrix so that

(6.4)  $\begin{bmatrix} D_1 \\ F \end{bmatrix} G = \begin{bmatrix} D_{11} & D_{12} \\ 0 & F_2 \end{bmatrix}, \qquad F_2 \in \mathbb R^{p \times r}, \quad \operatorname{rank} F_2 = r.$

If $r = q$, let $G := I_{q \times q}$. Now, let us partition $G^{-1}$ as

(6.5)  $G^{-1} =: \begin{bmatrix} \bar G_1 \\ \bar G_2 \end{bmatrix}, \qquad \bar G_1 \in \mathbb R^{(q - r) \times q}, \quad \bar G_2 \in \mathbb R^{r \times q}.$

Thus, from (6.4) and (6.5), we have

$y(t) = C_1 \bar x_1(t) + F G G^{-1} \bar w(t) = C_1 \bar x_1(t) + F_2 \bar G_2 \bar w(t).$

Hence, premultiplying the last equation by $F_2^+$, we obtain a linear combination of the rows of $\bar w$, i.e.,

(6.6)  $\bar G_2 \bar w(t) = F_2^+ y(t) - F_2^+ C_1 \bar x_1(t).$

Therefore, $\bar G_2 w(t)$ can be written as

(6.7)  $\bar G_2 w(t) = F_2^+ y(t) - F_2^+ C_1 \bar x_1(t) + \bar G_2 \bar K_1 \bar x_{21}(t).$

Now, let $z_w$ be the state vector of the auxiliary system characterized by the equation

(6.8)  $\dot z_w(t) = D_{11}^+ \Bigl( A_1 \hat z_1(t) + B_1 u + D_{12} \bigl( F_2^+ y(t) - F_2^+ C_1 \hat z_1(t) \bigr) \Bigr) + u_w(t) - \bar G_1 \bar K_1 \hat z_{21}(t).$

Let us estimate $\bar G_1 w(t)$ using a sliding-mode technique, specifically the super-twisting algorithm. We design the sliding variable $\xi$ in the following way:

(6.9)  $\xi(t) = D_{11}^+ \hat z_1(t) - z_w(t).$

Thus, in view of the identity $D_1 \bar w = D_1 G G^{-1} \bar w = D_{11} \bar G_1 \bar w + D_{12} \bar G_2 \bar w$, from (6.1), (6.6), and (6.8), we achieve the equality

$\dot\xi(t) = \bar G_1 w(t) - u_w(t) - \bar G_1 \bar K_1 \bigl( \bar x_{21}(t) - \hat z_{21}(t) \bigr)$

for all $t \ge t_{n_1-1}$. Then, using

$u_w(t) = \bar u_w(t) + \lambda |\xi|^{1/2} \operatorname{sign} \xi, \qquad \dot{\bar u}_w(t) = \alpha \operatorname{sign} \xi(t),$

with $\alpha > \kappa \ge \alpha_w$, $0 < \theta < 1$, and $\lambda > \sqrt{\dfrac{2(1+\theta)(\kappa + \alpha)}{(1-\theta)(\alpha - \kappa)}}$,

there is a reaching time $t_w$ to the second-order sliding mode ($\xi(t) = \dot\xi(t) = 0$ for all $t \ge t_w > t_{n_1-1}$). Hence, we get

(6.10)  $\bar u_w(t) = \bar G_1 w(t) - \bar G_1 \bar K_1 \bigl( \bar x_{21}(t) - \hat z_{21}(t) \bigr).$
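A matrix $G$ with the property required in (6.4) can be built from a singular value decomposition of $F$: the first $q - r$ columns of $G$ span $\ker F$, and the remaining columns complete them to a basis of $\mathbb R^q$. `build_G` is a hypothetical helper, and the $F$ below is illustrative.

```python
import numpy as np

def build_G(F, tol=1e-10):
    """Nonsingular G with F @ G = [0  F2], rank F2 = rank F (cf. (6.4)).

    The first q - r columns of G span ker F (right singular vectors with
    zero singular value); the remaining r columns complete them to a basis
    of R^q, taken here from the row space of F."""
    U, sv, Vt = np.linalg.svd(F)
    r = int(np.sum(sv > tol))
    G = np.hstack([Vt[r:].T, Vt[:r].T])   # kernel basis first, complement after
    return G, r

F = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])           # illustrative F with rank 1, q = 3
G, r = build_G(F)
FG = F @ G
print(r, np.allclose(FG[:, :F.shape[1] - r], 0.0))
```

Since the columns of $G$ are orthonormal here, $G^{-1} = G^T$, so the partition (6.5) into $\bar G_1$ and $\bar G_2$ is obtained directly by splitting the rows of $G^T$.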

Thus, from (6.3),

$\bar u_w(t) \to \bar G_1 w(t)$ as $t \to \infty$.

The estimate of $w(t)$ is then given by

(6.11)  $\hat w(t) = G \begin{bmatrix} \bar u_w(t) \\ F_2^+ y(t) - F_2^+ C_1 \hat z_1(t) + \bar G_2 \bar K_1 \hat z_{21}(t) \end{bmatrix}.$

In view of (6.7), (6.10), and (6.5), we achieve the equality

$\hat w(t) = w(t) - \bar K_1 \bigl( \bar x_{21}(t) - \hat z_{21}(t) \bigr) + G \begin{bmatrix} 0 \\ F_2^+ C_1 \end{bmatrix} \bigl( \bar x_1(t) - \hat z_1(t) \bigr).$

Hence, from (5.17) and (6.3), we conclude that $\hat w(t)$ converges asymptotically to $w(t)$, i.e.,

(6.12)  $\hat w(t) \to w(t)$ as $t \to \infty$.

Remark 4. It should be noticed that, for the case when B1 is fulfilled with $\operatorname{rank} M_{n,\Sigma} = \operatorname{rank} O$ (Theorem 4.7), $\bar x^T = [\bar x_1^T \;\; \bar x_{22}^T]$. Therefore, the limit in (6.12) becomes the equality $\hat w(t) = w(t)$ for all $t \ge t_w > t_{n_1-1}$.

7. Numerical examples. Here we give two numerical examples. The first one shows the design scheme for the estimation of the state of a strongly detectable system. The second one shows the design scheme for the estimation of the unknown inputs of a system which is not strongly detectable.

7.1. Example 1. Consider the following academic example: a linear system with state $x \in \mathbb R^5$ governed by the equations

$\dot x = A x + B (u + w_1), \qquad y = C x + \tilde F w_2.$

Defining $w = [w_1 \;\; w_2]^T$, $D = [B \;\; 0_{5 \times 1}]$, and $F = [0 \;\; \tilde F]$, this linear system takes the form of (2.1). In the simulations, the control $u = K \hat x + 1.5 \sin(2t)$ was used, with a stabilizing feedback gain $K$. The unknown inputs are $w_1 = 2 \sin(2t) + 0.47$ and $w_2 = \sin(2t) + 0.53$. It can be verified that the set of the invariant zeros of $\Sigma$ consists of a complex-conjugate pair together with the real zero $-3.27$, all lying in $\mathbb C^-$. Therefore, the system $\Sigma$ is not strongly observable but, from Theorem 3.6, it is strongly detectable.

Construction of the hierarchical observer for $\bar x_1$. The matrices $M_{1,\bar\Sigma}$ and $M_{2,\bar\Sigma}$ are computed following (3.3) for $\Sigma = \bar\Sigma$. As we can anticipate from Lemma 3.4, the matrix $M_{2,\bar\Sigma}$ is invertible.
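The assembly (6.11) works because stacking $\bar G_1 w$ on top of $\bar G_2 w$ reproduces $G^{-1} w$, so premultiplying by $G$ returns $w$ itself. A minimal numerical check of this identity, with an arbitrary (randomly generated, illustrative) nonsingular $G$:

```python
import numpy as np

# G^{-1} w = [G1bar @ w; G2bar @ w]; stacking the sliding-mode estimate of
# G1bar @ w on top of the algebraic estimate of G2bar @ w and premultiplying
# by G recovers w (cf. (6.5), (6.11)).
rng = np.random.default_rng(0)
q, r = 3, 1
G = rng.normal(size=(q, q))             # illustrative nonsingular G
Ginv = np.linalg.inv(G)
G1bar, G2bar = Ginv[: q - r], Ginv[q - r:]

w = rng.normal(size=q)                  # "true" unknown input at one instant
w_hat = G @ np.concatenate([G1bar @ w, G2bar @ w])
print(np.allclose(w_hat, w))
```

In the scheme itself, $\bar G_1 w$ comes from the super-twisting term $\bar u_w$ and $\bar G_2 w$ from the output-based expression (6.7), so $\hat w$ inherits the asymptotic convergence of those two estimates.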

Fig. 7.1. Error of observation $\bar e_1 = \bar x_1 - \hat z_1$ and $\bar e_2 = \bar x_2 - \hat z_2$, for Example 1.

We construct the Luenberger observer as in (5.2). Next, we construct $\sigma$ as in (5.3). In this case $n_1 := \dim M_{2,\bar\Sigma} = 2$; therefore, only one sliding surface $s^1 \in \mathbb R^2$ needs to be designed, following (5.9); the matrices $M_{2,\bar\Sigma}$ and $T_{1,\bar\Sigma}$ involved in its construction are computed following (3.3). The output injection $v^1$ is

$v_i^1 = \bar v_i^1 + 2 |s_i^1|^{1/2} \operatorname{sign} s_i^1, \qquad \dot{\bar v}_i^1 = 15 \operatorname{sign} s_i^1, \quad i = 1, 2.$

Thus, the reconstruction of $\bar x_1$ is given by $\hat z_1(t) = z_1(t) + v^{n_1-1}(t)$. The observer $\hat z_2(t)$ for $\bar x_2$ is designed according to (5.18); for this observer no gain needs to be calculated. In Figure 7.1 the observation errors $\bar e_1 = \bar x_1 - \hat z_1$ and $\bar e_2 = \bar x_2 - \hat z_2$ are drawn. For the simulations, a sampling step of $10^{-4}$ was used. The hierarchical observer for the original state $x$ is then designed as $\hat x := P^{-1} \hat z$. The trajectories of $x(t)$ together with the trajectories of its observer $\hat x(t)$ are depicted in Figure 7.2.

7.2. Example 2. Consider the following non-strongly-detectable system, with state $x \in \mathbb R^5$, unknown input vector $w = [w_1 \;\; w_2 \;\; w_3]^T$, and dynamics

$\dot x = A x + D w, \qquad y = C x + F w.$

Fig. 7.2. Trajectories of $x(t)$ (solid) and $\hat x(t)$ (dashed), for Example 1.

Next we compute the matrix $V$ that forms a basis of $V_\Sigma^*$ (see (3.4) and (4.2)), the matrix $N$ that forms a basis of $\mathcal N$ (see (4.1)), and the matrix $P$ that changes the coordinates of the system (see (3.5)). For this example the computation yields $V = N$. Thus, by the change of coordinates $\bar x = P x$, we get the decomposition obtained in (4.5), with $\bar x = [\bar x_1^T \;\; \bar x_{22}]^T$, $\bar x_1 = [\bar x_{1,1} \;\; \bar x_{1,2} \;\; \bar x_{1,3} \;\; \bar x_{1,4}]^T \in \mathbb R^4$, $\bar x_{22} \in \mathbb R$, and $A_{44} = 0$.

It can be verified that $\operatorname{rank} [D_1^T \;\; F^T]^T = 3$ and $\operatorname{rank} M_{4,\Sigma} = \operatorname{rank} O = 4$. Thus, the condition of Theorem 4.9 is fulfilled, which implies that $w$ can be reconstructed in finite time. In this case, the weakly unobservable subspace corresponding to $\Sigma = (A, C, D, F)$ and the unobservable subspace corresponding to $(A, C)$ are identical. This means that $A_4 = A_{44}$ and, consequently, $\bar x_{21}$ does not exist; therefore, for the reconstruction of $w$, only $\bar x_1$ has to be reconstructed. Nevertheless, since $A_{44} = 0$, according to Lemma 3.7, the system $\Sigma$ has exactly one invariant zero, which is equal to zero. Hence, $\Sigma$ is not strongly detectable; notice, moreover, that the pair $(A, C)$ is not even detectable. Consequently, the state vector can be estimated neither in finite time nor asymptotically.
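The rank computations used in this example can be reproduced with a few lines of linear algebra; the chain below is illustrative and only mimics the situation of Example 2 (an unobservable mode with eigenvalue zero, so $\operatorname{rank} O = 4 < n = 5$), not the paper's actual matrices.

```python
import numpy as np

def obs_rank(A, C):
    """Rank of the observability matrix O = [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    rows, blk = [C], C
    for _ in range(n - 1):
        blk = blk @ A
        rows.append(blk)
    return np.linalg.matrix_rank(np.vstack(rows))

# Illustrative 5th-order integrator chain (all eigenvalues at 0): measuring
# the second state leaves the first state unobservable, and the associated
# eigenvalue 0 is not stable, so the pair (A, C) is nondetectable.
A = np.diag(np.ones(4), 1)
C = np.array([[0.0, 1.0, 0.0, 0.0, 0.0]])
print(obs_rank(A, C))
```

Measuring the first state of the same chain instead gives full observability rank 5, which is the contrast that makes the nondetectable case visible.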

Fig. 7.3. Comparison between $w_i$ (solid) and their estimates $\hat w_i$ (dashed), $i = 1, 2, 3$, for Example 2.

Using the method proposed in section 6, we can estimate the unknown input vector $w$. First, for the estimation of $\bar x_1$, two sliding surfaces were needed, $s^1$ and $s^2$, designed according to (5.9) and (5.12). The next step was the estimation of $w$. The estimates $\hat w_i$ of $w_i$ ($i = 1, 2, 3$) are shown in Figure 7.3.

Conclusions. We have shown that, for a system with unknown inputs appearing explicitly in both the state equations and the system output, strong detectability is a necessary and sufficient condition for the estimation of the original state vector. If the system is not strongly observable, it is impossible to design a standard differential observer providing the state estimate; hence, we have proposed another approach, based on the design of an algebraic-type observer. We have shown that the suggested approach works successfully for both strongly observable and strongly detectable systems. To this end, we have proposed to decompose the system into two subsystems. The first one is strongly observable for the zero control input; the second one is not strongly observable but is detectable. Thus, in the new coordinates, one uses the output of the system and those of its derivatives unaffected by the unknown input to reconstruct the state vector of the first subsystem. For the second subsystem, one designs an observer that converges asymptotically to its state vector. This design scheme yields an observer whose trajectories converge to those of the original state vector and whose rate of convergence does not depend on the unknown inputs. Furthermore, we have shown that left invertibility is not a sufficient condition under which the estimation can be carried out.
We have also shown that a system can be non-strongly detectable, even nondetectable (as we saw in Example 2), while the estimation of the unknown inputs can still be carried out. Perhaps the most important result of this paper is the proof that the necessary and sufficient condition under which the estimation of the unknown inputs can be carried out is that the set of the invariant zeros of the system (with respect to the unknown inputs) that do not belong to the set of unobservable eigenvalues lies in the interior of the left half of the complex plane. Based on these results, we have proposed a design scheme for the estimation of the unknown inputs.

Appendix. Proofs of propositions, lemmas, and theorems.

Proof of Lemma 3.4. From (3.1), $\dim \ker M_{n,\Sigma_{K,P}} = \dim \ker M_{n,\Sigma}$. Then, applying (3.3) to calculate $M_{n,\Sigma_{K,P}}$, we get that $M_{n,\Sigma_{K,P}} = [M_{n_1,\bar\Sigma} \;\; 0]$, where $M_{n,\Sigma_{K,P}} \in \mathbb R^{n_1 \times n}$ and $M_{n_1,\bar\Sigma} \in \mathbb R^{n_1 \times n_1}$. Taking into account that $\operatorname{rank} M_{n,\Sigma_{K,P}} = n_1$, one can conclude that $\ker M_{n_1,\bar\Sigma} = V_{\bar\Sigma}^* = 0$.

Proof of Lemma 3.7. From (3.8), and by a rearrangement of matrices, we get

$\operatorname{rank} P(s) = \operatorname{rank} \left( \begin{bmatrix} \bar P & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} sI - A & -D \\ C & F \end{bmatrix} \begin{bmatrix} \bar P^{-1} & 0 \\ \bar K \bar P^{-1} & I \end{bmatrix} \right) = \operatorname{rank} \begin{bmatrix} sI - A_1 & 0 & -D_1 \\ -A_2 & sI - A_4 & -D_2 \\ C_1 & 0 & F \end{bmatrix}.$

From Lemma 3.4 and Fact 1, $\bar\Sigma := (A_1, C_1, D_1, F)$ has no invariant zeros. This means that, in the case $\operatorname{rank} [D_1^T \;\; F^T]^T = q$, the only way the previous arrangement of matrices can lose rank is when $s$ is an eigenvalue of $A_4$. This proves clause (a). On the other hand, if $\operatorname{rank} [D_1^T \;\; F^T]^T < q$, there exists a nonsingular matrix⁷ $G \in \mathbb R^{q \times q}$ so that

$\begin{bmatrix} D_1 \\ F \end{bmatrix} G = \begin{bmatrix} H_1 & 0 \\ H_2 & 0 \end{bmatrix}.$

Hence,

$\operatorname{rank} P(s) = \operatorname{rank} \begin{bmatrix} sI - A_1 & 0 & H_1 & 0 \\ -A_2 & sI - A_4 & D_{21} & D_{22} \\ C_1 & 0 & H_2 & 0 \end{bmatrix},$

with $P(s)$ defined in (3.2) and $[D_{21} \;\; D_{22}] := D_2 G$. Thus, $P(s)$ loses rank for every $s \in \mathbb C$, so clause (b) is proven.

Proof of Lemma 4.2. Necessity: suppose $\operatorname{rank} [D_1^T \;\; F^T]^T < q$. Then there is a constant vector $v \in \mathbb R^q$, $v \ne 0$, so that $[D_1^T \;\; F^T]^T v = 0$. Let us choose $w_1(t) = \bar K \bar x_2(t) + v$ and $w_2(t) = \bar K \bar x_2(t)$. Thus, for $x_0 = 0$, from (3.8), we have that $y_{w_1,x_0}(t) = y_{w_2,x_0}(t) \equiv 0$, while $w_1(t) - w_2(t) = v \ne 0$. Thus, the necessity is proven.

Sufficiency: suppose $\operatorname{rank} [D_1^T \;\; F^T]^T = q$. Let $w_1(t)$ and $w_2(t)$ be two inputs so that $y_{w_1,x_0}(t) = y_{w_2,x_0}(t)$ for all $t$. Now, notice that the equality $x_{w_1,x_0}(t) - x_{w_2,x_0}(t) = x_{0,w_1-w_2}(t)$ is valid for any initial condition $x_0$ (by notation, $x_{0,w_1-w_2}(0) = 0$). Thus, the initial condition for the transformed system $\bar x_{0,w_1-w_2}(t)$ with unknown input $w_1(t) - w_2(t)$ is $\bar x_{0,w_1-w_2}(0) = P x_{0,w_1-w_2}(0) = 0$. Furthermore, we have that $y_{0,w_1-w_2}(t) = y_{w_1,x_0}(t) - y_{w_2,x_0}(t) = 0$ for all $t$.
This, due to the fact that $\bar\Sigma = (A_1, C_1, D_1, F)$ is strongly observable, implies that $\bar x_{1,w_1-w_2}(t) = 0$ for all $t$, which, from (3.8), leads to the equality

(A.1)  $w_1(t) - w_2(t) - \bar K \bar x_{2,w_1-w_2}(t) = 0$ for all $t$.

Therefore, we have that $\dot{\bar x}_{2,w_1-w_2}(t) = A_4 \bar x_{2,w_1-w_2}(t)$, and since $\bar x_{w_1-w_2}(0) = 0$, we get $\bar x_{2,w_1-w_2}(t) \equiv 0$. The last equality and (A.1) imply that $w_1(t) = w_2(t)$, which proves the sufficiency.

⁷Actually, $G$ is used to divide the matrix $J := [D_1^T \;\; F^T]^T$. Indeed, let $G_2$ be a matrix whose columns span the kernel of $J$, and let $G_1$ be a matrix so that $G = [G_1 \;\; G_2]$ is nonsingular. Thus, $JG = H$.

Proof of Proposition 4.4. Partitioning $\bar K$ and $Q$ as

$\bar K =: [\bar K_1 \;\; \bar K_2], \qquad Q =: \begin{bmatrix} Q_1 & Q_3 \\ Q_2 & Q_4 \end{bmatrix},$

the equations in (3.6) can be rewritten in the form

$A [V \;\; N] + D [\bar K_1 \;\; \bar K_2] = [V Q_1 + N Q_2 \;\;\; V Q_3 + N Q_4] \quad \text{and} \quad C [V \;\; N] + F [\bar K_1 \;\; \bar K_2] = 0.$

From there we can obtain the equations

(A.2)  $A N + D \bar K_2 = V Q_3 + N Q_4,$

(A.3)  $C N + F \bar K_2 = 0.$

Taking into account that $M_{n,\Sigma} V = 0$ and $C N = 0$, we achieve the identities $M_{n,\Sigma} A N + D_1 \bar K_2 = 0$ and $F \bar K_2 = 0$. Furthermore, since $N$ spans $\mathcal N$, $A \mathcal N \subset \mathcal N \subset V_\Sigma^*$, and $M_{n,\Sigma} V_\Sigma^* = 0$; then $M_{n,\Sigma} A N = 0$. Therefore, $[D_1^T \;\; F^T]^T \bar K_2 = 0$, which, from the first assumption of the proposition, implies $\bar K_2 = 0$. Moreover, since the span of $A N$ belongs to the span of $N$, and because $V$ and $N$ are linearly independent, from (A.2) we conclude that $Q_3 = 0$.

Proof of Lemma 4.6. For $V$ given by (4.2), and from Proposition 4.4, we have $\bar K_2 = 0$ and $Q_3 = 0$. Furthermore, $(A + D \bar K) V = V Q$; thus, from (4.3) and (4.4), we get $A N = N A_{44}$. Hence, with $P^{-1} = [M_{n,\Sigma}^+ \;\; V \;\; N]$, we can decompose the pair $(A, C)$ into its observable and unobservable parts. Indeed, first let us make the following matrix transformation,

$P A P^{-1} = \begin{bmatrix} \bar A_1 & 0 \\ \bar A_2 & A_{44} \end{bmatrix}, \qquad C P^{-1} = [\bar C \;\; 0],$

where $\bar A_1 = M_{n,\Sigma} A [M_{n,\Sigma}^+ \;\; V]$, $\bar A_2 = V^+ A [M_{n,\Sigma}^+ \;\; V]$, and $\bar C = C [M_{n,\Sigma}^+ \;\; V]$. It is known that, for this kind of transformation, the pair $(\bar A_1, \bar C)$ is observable (see, e.g., [26]), and the $(A, C)$-unobservable eigenvalues are the eigenvalues of the matrix $A_{44}$, which proves clause (a). Besides, as was established in Lemma 3.7, the invariant zeros of $(A, C, D, F)$ are the eigenvalues of $A_4$. Therefore, taking into account the specific form of $A_4$ obtained in (4.4), the set of invariant zeros of $\Sigma$ that do not belong to the set of $(A, C)$-unobservable eigenvalues is identical to the set of eigenvalues of $A_{41}$, which proves clause (b).

Proposition A.1. Under the condition $\mathcal N \subsetneq V_\Sigma^*$, for any matrices $V$ and $\bar K_1$ satisfying (4.2) and (4.3), respectively, the pair $(A_{41}, \bar K_1)$ is observable.
Proof of Proposition A.1. Suppose that $(A_{41}, \bar K_1)$ is not observable; then there are a vector $p \ne 0$ and a scalar constant $\lambda$ so that $A_{41} p = \lambda p$ and $\bar K_1 p = 0$. Then, since $A_{41} = Q_1$, from (3.6), (4.2), and (4.3), we have

$A [V p \;\; N] = [V p \;\; N] \begin{bmatrix} \lambda & 0 \\ Q_2 p & Q_4 \end{bmatrix}, \qquad C [V p \;\; N] = 0.$

That is, the span of $[V p \;\; N]$ is an $A$-invariant subspace contained in $\ker C$ with dimension bigger than $\dim \mathcal N$, which is a contradiction, since $\mathcal N$ is the greatest $A$-invariant subspace belonging to $\ker C$.

Proof of Theorem 4.7. First, let us prove the equivalence (iii) ⇔ (iv). If (iii) is true, from Lemma 4.2, $\operatorname{rank} [D_1^T \;\; F^T]^T = q$. Furthermore, since $\ker M_{n,\Sigma} = V_\Sigma^* = \mathcal N = \ker O$, we have $\operatorname{rank} M_{n,\Sigma} = \operatorname{rank} O$. On the other hand, if (iv) is true, then

$\dim V_\Sigma^* = \dim \ker M_{n,\Sigma} = n - \operatorname{rank} M_{n,\Sigma} = n - \operatorname{rank} O = \dim \ker O = \dim \mathcal N.$

Therefore, $V_\Sigma^* = \mathcal N$. The previous identity and Lemma 4.2 prove the implication (iv) ⇒ (iii).

The proof of (ii) ⇔ (iv) is as follows. Supposing that (iv) is true, since $V_\Sigma^* = \mathcal N$, $P^{-1} = [M_{n,\Sigma}^+ \;\; N]$ and $A_4 = A_{44}$. Therefore, the set of eigenvalues of $A_4$ is at the same time the set of the invariant zeros of $\Sigma$ and the set of $(A, C)$-unobservable eigenvalues. Thus, (iv) ⇒ (ii) is proven. Now, suppose that (ii) is satisfied. Then, by Lemma 3.7(b), $\operatorname{rank} [D_1^T \;\; F^T]^T = q$. Moreover, by Lemma 3.7(a) and Lemma 4.6(a), $A_4 = A_{44}$, i.e., $\dim V_\Sigma^* = n - n_1 - n_2 = \dim \mathcal N$, which implies $\operatorname{rank} M_{n,\Sigma} = \operatorname{rank} O$. Thus, the implication (ii) ⇒ (iv) is proven.

Now, let us prove the equivalence (i) ⇔ (iv). Suppose $V_\Sigma^* \ne \mathcal N$; then, by choosing $\bar x_1(0) = 0$ and $w(t) = \bar K_1 \bar x_{21}(t)$, we obtain the identities $\bar x_1(t) \equiv 0$, $y(t) \equiv 0$, and $\dot{\bar x}_{21}(t) = A_{41} \bar x_{21}(t)$. Thus, by Proposition A.1, $w \not\equiv 0$ if $\bar x_{21}(0) \ne 0$. This means that, for $V_\Sigma^* \ne \mathcal N$, there exist conditions such that $w(t) \not\equiv 0$ in spite of $y(t) \equiv 0$. Therefore, claim (i) is achieved only if the identity $V_\Sigma^* = \mathcal N$ is true, that is, if $\operatorname{rank} M_{n,\Sigma} = \operatorname{rank} O$. Furthermore, it is clear that the left invertibility property is necessary for fulfilling (4.6). Therefore, by Lemma 4.2, we have $\operatorname{rank} [D_1^T \;\; F^T]^T = q$, and (i) ⇒ (iv) is proven. Now, suppose that (iv) is true and $y(t) \equiv 0$. Then we have, from Lemma 3.4, $\bar x_1(t) \equiv 0$ and $\bar w(t) \equiv 0$. But, because in this case $V_\Sigma^* = \mathcal N$, $\bar w(t) = w(t)$. Therefore, (iv) ⇒ (i) is proven.

Part of the proof of Theorem 4.8 is based on the following proposition.

Proposition A.2. The set of the invariant zeros of $\Sigma$ that do not belong to the set of $(A, C)$-unobservable eigenvalues is contained in $\mathbb C^-$ if, and only if,

(A.4)  $\operatorname{rank} \begin{bmatrix} D_1 \\ F \end{bmatrix} = q$ and $A_{41}$ is Hurwitz.
Proof of Proposition A.2. Suppose $\operatorname{rank} [D_1^T \;\; F^T]^T < q$; then any $s \in \mathbb C$ is an invariant zero of $\Sigma$ (Lemma 3.7). Hence, since the set of $(A, C)$-unobservable eigenvalues is finite, in this case there is an (infinite) set of invariant zeros of $\Sigma$ that do not belong to the set of $(A, C)$-unobservable eigenvalues and have positive real part. Thus, we have proven that $\operatorname{rank} [D_1^T \;\; F^T]^T = q$, which implies, due to Lemma 4.6, that $A_{41}$ is a Hurwitz matrix. The sufficiency comes from (A.4) and Lemma 4.6(b).

Proof of Theorem 4.8. The equivalence (ii) ⇔ (iii) follows directly from Proposition A.2. Now, suppose that clause (i) is true. From the proof of necessity of Lemma 4.2, we have that the condition $\operatorname{rank} [D_1^T \;\; F^T]^T = q$ is essential for fulfilling clause (i). Now, selecting $\bar x_1(0) = 0$ and $w(t) = \bar K_1 \bar x_{21}(t)$, we obtain the identities $\bar x_1(t) \equiv 0$, $\dot{\bar x}_{21}(t) = A_{41} \bar x_{21}(t)$, and $y(t) \equiv 0$; therefore, $w(t) \to 0$ is required. Now, if $V_\Sigma^* = \mathcal N$, the set of eigenvalues of $A_{41}$ is empty. If $V_\Sigma^* \ne \mathcal N$, by Proposition A.1, $w(t)$ tends to zero if, and only if, $\bar x_{21}(t)$ tends to zero. Hence, we conclude that $A_{41}$ is Hurwitz. Thus, we have the implication (i) ⇒ (iii). Now, suppose that (iii) is true. If $y(t) \equiv 0$, we have $\bar x_1(t) \equiv 0$, $\dot{\bar x}_{21}(t) = A_{41} \bar x_{21}(t)$, and $w(t) = \bar K_1 \bar x_{21}(t)$. Since $A_{41}$ is Hurwitz, $\bar x_{21}(t) \to 0$ and so $w(t) \to 0$. Thus, the implication (iii) ⇒ (i) is proven.


More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Homogeneous Linear Systems and Their General Solutions

Homogeneous Linear Systems and Their General Solutions 37 Homogeneous Linear Systems and Their General Solutions We are now going to restrict our attention further to the standard first-order systems of differential equations that are linear, with particular

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

Math 1060 Linear Algebra Homework Exercises 1 1. Find the complete solutions (if any!) to each of the following systems of simultaneous equations:

Math 1060 Linear Algebra Homework Exercises 1 1. Find the complete solutions (if any!) to each of the following systems of simultaneous equations: Homework Exercises 1 1 Find the complete solutions (if any!) to each of the following systems of simultaneous equations: (i) x 4y + 3z = 2 3x 11y + 13z = 3 2x 9y + 2z = 7 x 2y + 6z = 2 (ii) x 4y + 3z =

More information

POLE PLACEMENT. Sadegh Bolouki. Lecture slides for ECE 515. University of Illinois, Urbana-Champaign. Fall S. Bolouki (UIUC) 1 / 19

POLE PLACEMENT. Sadegh Bolouki. Lecture slides for ECE 515. University of Illinois, Urbana-Champaign. Fall S. Bolouki (UIUC) 1 / 19 POLE PLACEMENT Sadegh Bolouki Lecture slides for ECE 515 University of Illinois, Urbana-Champaign Fall 2016 S. Bolouki (UIUC) 1 / 19 Outline 1 State Feedback 2 Observer 3 Observer Feedback 4 Reduced Order

More information

1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem.

1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem. STATE EXAM MATHEMATICS Variant A ANSWERS AND SOLUTIONS 1 1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem. Definition

More information

On the Stabilization of Neutrally Stable Linear Discrete Time Systems

On the Stabilization of Neutrally Stable Linear Discrete Time Systems TWCCC Texas Wisconsin California Control Consortium Technical report number 2017 01 On the Stabilization of Neutrally Stable Linear Discrete Time Systems Travis J. Arnold and James B. Rawlings Department

More information

21 Linear State-Space Representations

21 Linear State-Space Representations ME 132, Spring 25, UC Berkeley, A Packard 187 21 Linear State-Space Representations First, let s describe the most general type of dynamic system that we will consider/encounter in this class Systems may

More information

Lecture Notes of EE 714

Lecture Notes of EE 714 Lecture Notes of EE 714 Lecture 1 Motivation Systems theory that we have studied so far deals with the notion of specified input and output spaces. But there are systems which do not have a clear demarcation

More information

Design of Positive Linear Observers for Positive Systems via Coordinates Transformation and Positive Realization

Design of Positive Linear Observers for Positive Systems via Coordinates Transformation and Positive Realization Design of Positive Linear Observers for Positive Systems via Coordinates Transformation and Positive Realization Juhoon Back 1 Alessandro Astolfi 1 1 Department of Electrical and Electronic Engineering

More information

A Short Course on Frame Theory

A Short Course on Frame Theory A Short Course on Frame Theory Veniamin I. Morgenshtern and Helmut Bölcskei ETH Zurich, 8092 Zurich, Switzerland E-mail: {vmorgens, boelcskei}@nari.ee.ethz.ch April 2, 20 Hilbert spaces [, Def. 3.-] and

More information

CME 345: MODEL REDUCTION

CME 345: MODEL REDUCTION CME 345: MODEL REDUCTION Balanced Truncation Charbel Farhat & David Amsallem Stanford University cfarhat@stanford.edu These slides are based on the recommended textbook: A.C. Antoulas, Approximation of

More information

Chapter 2 Linear Transformations

Chapter 2 Linear Transformations Chapter 2 Linear Transformations Linear Transformations Loosely speaking, a linear transformation is a function from one vector space to another that preserves the vector space operations. Let us be more

More information

Nonlinear Control Systems

Nonlinear Control Systems Nonlinear Control Systems António Pedro Aguiar pedro@isr.ist.utl.pt 7. Feedback Linearization IST-DEEC PhD Course http://users.isr.ist.utl.pt/%7epedro/ncs1/ 1 1 Feedback Linearization Given a nonlinear

More information

Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1)

Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1) Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1) Travis Schedler Tue, Oct 18, 2011 (version: Tue, Oct 18, 6:00 PM) Goals (2) Solving systems of equations

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

A method for computing quadratic Brunovsky forms

A method for computing quadratic Brunovsky forms Electronic Journal of Linear Algebra Volume 13 Volume 13 (25) Article 3 25 A method for computing quadratic Brunovsky forms Wen-Long Jin wjin@uciedu Follow this and additional works at: http://repositoryuwyoedu/ela

More information

6 Linear Equation. 6.1 Equation with constant coefficients

6 Linear Equation. 6.1 Equation with constant coefficients 6 Linear Equation 6.1 Equation with constant coefficients Consider the equation ẋ = Ax, x R n. This equating has n independent solutions. If the eigenvalues are distinct then the solutions are c k e λ

More information

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same.

Equality: Two matrices A and B are equal, i.e., A = B if A and B have the same order and the entries of A and B are the same. Introduction Matrix Operations Matrix: An m n matrix A is an m-by-n array of scalars from a field (for example real numbers) of the form a a a n a a a n A a m a m a mn The order (or size) of A is m n (read

More information

Multiple-mode switched observer-based unknown input estimation for a class of switched systems

Multiple-mode switched observer-based unknown input estimation for a class of switched systems Multiple-mode switched observer-based unknown input estimation for a class of switched systems Yantao Chen 1, Junqi Yang 1 *, Donglei Xie 1, Wei Zhang 2 1. College of Electrical Engineering and Automation,

More information

16.31 Fall 2005 Lecture Presentation Mon 31-Oct-05 ver 1.1

16.31 Fall 2005 Lecture Presentation Mon 31-Oct-05 ver 1.1 16.31 Fall 2005 Lecture Presentation Mon 31-Oct-05 ver 1.1 Charles P. Coleman October 31, 2005 1 / 40 : Controllability Tests Observability Tests LEARNING OUTCOMES: Perform controllability tests Perform

More information

5 Linear Transformations

5 Linear Transformations Lecture 13 5 Linear Transformations 5.1 Basic Definitions and Examples We have already come across with the notion of linear transformations on euclidean spaces. We shall now see that this notion readily

More information

The Cyclic Decomposition of a Nilpotent Operator

The Cyclic Decomposition of a Nilpotent Operator The Cyclic Decomposition of a Nilpotent Operator 1 Introduction. J.H. Shapiro Suppose T is a linear transformation on a vector space V. Recall Exercise #3 of Chapter 8 of our text, which we restate here

More information

CONTROL DESIGN FOR SET POINT TRACKING

CONTROL DESIGN FOR SET POINT TRACKING Chapter 5 CONTROL DESIGN FOR SET POINT TRACKING In this chapter, we extend the pole placement, observer-based output feedback design to solve tracking problems. By tracking we mean that the output is commanded

More information

October 4, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS

October 4, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS October 4, 207 EIGENVALUES AND EIGENVECTORS. APPLICATIONS RODICA D. COSTIN Contents 4. Eigenvalues and Eigenvectors 3 4.. Motivation 3 4.2. Diagonal matrices 3 4.3. Example: solving linear differential

More information

Solution for Homework 5

Solution for Homework 5 Solution for Homework 5 ME243A/ECE23A Fall 27 Exercise 1 The computation of the reachable subspace in continuous time can be handled easily introducing the concepts of inner product, orthogonal complement

More information

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.

More information

Contents. 0.1 Notation... 3

Contents. 0.1 Notation... 3 Contents 0.1 Notation........................................ 3 1 A Short Course on Frame Theory 4 1.1 Examples of Signal Expansions............................ 4 1.2 Signal Expansions in Finite-Dimensional

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

Control engineering sample exam paper - Model answers

Control engineering sample exam paper - Model answers Question Control engineering sample exam paper - Model answers a) By a direct computation we obtain x() =, x(2) =, x(3) =, x(4) = = x(). This trajectory is sketched in Figure (left). Note that A 2 = I

More information

Math 24 Winter 2010 Sample Solutions to the Midterm

Math 24 Winter 2010 Sample Solutions to the Midterm Math 4 Winter Sample Solutions to the Midterm (.) (a.) Find a basis {v, v } for the plane P in R with equation x + y z =. We can take any two non-collinear vectors in the plane, for instance v = (,, )

More information

RECURSIVE ESTIMATION AND KALMAN FILTERING

RECURSIVE ESTIMATION AND KALMAN FILTERING Chapter 3 RECURSIVE ESTIMATION AND KALMAN FILTERING 3. The Discrete Time Kalman Filter Consider the following estimation problem. Given the stochastic system with x k+ = Ax k + Gw k (3.) y k = Cx k + Hv

More information

September 26, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS

September 26, 2017 EIGENVALUES AND EIGENVECTORS. APPLICATIONS September 26, 207 EIGENVALUES AND EIGENVECTORS. APPLICATIONS RODICA D. COSTIN Contents 4. Eigenvalues and Eigenvectors 3 4.. Motivation 3 4.2. Diagonal matrices 3 4.3. Example: solving linear differential

More information

On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs

On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs On Eigenvalues of Laplacian Matrix for a Class of Directed Signed Graphs Saeed Ahmadizadeh a, Iman Shames a, Samuel Martin b, Dragan Nešić a a Department of Electrical and Electronic Engineering, Melbourne

More information

ECEN 605 LINEAR SYSTEMS. Lecture 7 Solution of State Equations 1/77

ECEN 605 LINEAR SYSTEMS. Lecture 7 Solution of State Equations 1/77 1/77 ECEN 605 LINEAR SYSTEMS Lecture 7 Solution of State Equations Solution of State Space Equations Recall from the previous Lecture note, for a system: ẋ(t) = A x(t) + B u(t) y(t) = C x(t) + D u(t),

More information

MATH 205 HOMEWORK #3 OFFICIAL SOLUTION. Problem 1: Find all eigenvalues and eigenvectors of the following linear transformations. (a) F = R, V = R 3,

MATH 205 HOMEWORK #3 OFFICIAL SOLUTION. Problem 1: Find all eigenvalues and eigenvectors of the following linear transformations. (a) F = R, V = R 3, MATH 205 HOMEWORK #3 OFFICIAL SOLUTION Problem 1: Find all eigenvalues and eigenvectors of the following linear transformations. a F = R, V = R 3, b F = R or C, V = F 2, T = T = 9 4 4 8 3 4 16 8 7 0 1

More information

Instructions Please answer the five problems on your own paper. These are essay questions: you should write in complete sentences.

Instructions Please answer the five problems on your own paper. These are essay questions: you should write in complete sentences. Instructions Please answer the five problems on your own paper. These are essay questions: you should write in complete sentences.. Recall that P 3 denotes the vector space of polynomials of degree less

More information

3 Gramians and Balanced Realizations

3 Gramians and Balanced Realizations 3 Gramians and Balanced Realizations In this lecture, we use an optimization approach to find suitable realizations for truncation and singular perturbation of G. It turns out that the recommended realizations

More information

Eigenvalues, Eigenvectors. Eigenvalues and eigenvector will be fundamentally related to the nature of the solutions of state space systems.

Eigenvalues, Eigenvectors. Eigenvalues and eigenvector will be fundamentally related to the nature of the solutions of state space systems. Chapter 3 Linear Algebra In this Chapter we provide a review of some basic concepts from Linear Algebra which will be required in order to compute solutions of LTI systems in state space form, discuss

More information

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1) EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Implications of the Constant Rank Constraint Qualification

Implications of the Constant Rank Constraint Qualification Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates

More information

Key words. n-d systems, free directions, restriction to 1-D subspace, intersection ideal.

Key words. n-d systems, free directions, restriction to 1-D subspace, intersection ideal. ALGEBRAIC CHARACTERIZATION OF FREE DIRECTIONS OF SCALAR n-d AUTONOMOUS SYSTEMS DEBASATTAM PAL AND HARISH K PILLAI Abstract In this paper, restriction of scalar n-d systems to 1-D subspaces has been considered

More information

On the simultaneous diagonal stability of a pair of positive linear systems

On the simultaneous diagonal stability of a pair of positive linear systems On the simultaneous diagonal stability of a pair of positive linear systems Oliver Mason Hamilton Institute NUI Maynooth Ireland Robert Shorten Hamilton Institute NUI Maynooth Ireland Abstract In this

More information

Controllability, Observability & Local Decompositions

Controllability, Observability & Local Decompositions ontrollability, Observability & Local Decompositions Harry G. Kwatny Department of Mechanical Engineering & Mechanics Drexel University Outline Lie Bracket Distributions ontrollability ontrollability Distributions

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors LECTURE 3 Eigenvalues and Eigenvectors Definition 3.. Let A be an n n matrix. The eigenvalue-eigenvector problem for A is the problem of finding numbers λ and vectors v R 3 such that Av = λv. If λ, v are

More information

A proof of the Jordan normal form theorem

A proof of the Jordan normal form theorem A proof of the Jordan normal form theorem Jordan normal form theorem states that any matrix is similar to a blockdiagonal matrix with Jordan blocks on the diagonal. To prove it, we first reformulate it

More information

ENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition

ENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition ENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition Wing-Kin (Ken) Ma 2017 2018 Term 2 Department of Electronic Engineering The Chinese University of Hong Kong Lecture 8: QR Decomposition

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For

More information

Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1)

Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1) Lecture 19: Polar and singular value decompositions; generalized eigenspaces; the decomposition theorem (1) Travis Schedler Thurs, Nov 17, 2011 (version: Thurs, Nov 17, 1:00 PM) Goals (2) Polar decomposition

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

Recursive Determination of the Generalized Moore Penrose M-Inverse of a Matrix

Recursive Determination of the Generalized Moore Penrose M-Inverse of a Matrix journal of optimization theory and applications: Vol. 127, No. 3, pp. 639 663, December 2005 ( 2005) DOI: 10.1007/s10957-005-7508-7 Recursive Determination of the Generalized Moore Penrose M-Inverse of

More information