2 Introduction of Discrete-Time Systems


This chapter concerns an important subclass of discrete-time systems: linear and time-invariant systems excited by Gaussian distributed stochastic processes. In the following, it will be shown how these systems can be represented in state space or, equivalently, by ARMAV models. The modelling of linear time-invariant stochastic systems using univariate ARMA models has been covered extensively in many textbooks, such as Box et al. [16], Harvey [36], Harvey [37] and Pandit et al. [88]. It is, however, only recently that modelling using multivariate ARMAV models has been covered systematically, see e.g. Aoki [11], Gevers et al. [29], Hannan et al. [33], Pandit [84], Pi et al. [92] and Piombo et al. [93]. Multivariate state space systems have been applied in e.g. control engineering for several decades, and the analytical manipulation and numerical processing of these systems have been covered extensively in many textbooks, see e.g. Goodwin et al. [31], Hannan et al. [33], Kailath [48], Ljung [71] and Middleton et al. [79].

Section 2.1 introduces the state space representation of a linear and time-invariant discrete-time system excited by a Gaussian stochastic excitation. Besides the applied Gaussian stochastic excitation, a system may also be affected by noise or disturbance. In section 2.2, it is therefore investigated how to account for disturbance by means of a so-called innovation state space representation. Section 2.3 then shows how the state space system presented in section 2.1 can be converted to an equivalent ARMAV model. Similarly, in section 2.4 it is shown how the innovation state space representation presented in section 2.2 can be equivalently represented by an ARMAV model. The conversions from state space to the particular ARMAV models are unique. However, there are many ways to realise ARMAV models in state space, i.e. ways to select an internal structure of the state space representation. In section 2.5 one of these realizations will be presented.

2.1 Modelling of Discrete-Time Systems

In this section a state space representation of a stochastically excited discrete-time system is introduced. For simplicity, it will be assumed that the discrete-time system has as many inputs as there are outputs, i.e. that the dimension of the input process is the same as the dimension of the output process.

2.1.1 The Stochastic State Space System

A linear and time-invariant discrete-time system can be represented in state space as

  x(t+1) = A x(t) + B u(t)
  y(t)   = C x(t) + D u(t)                                    (2.1)
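As a minimal numerical sketch of the stochastic state space system (2.1) — the matrices and dimensions below are illustrative, not from the text — the recursion can be simulated directly:

```python
import numpy as np

# Hypothetical m = 2, p = 1 example of (2.1): x(t+1) = A x(t) + B u(t),
# y(t) = C x(t) + D u(t), driven by Gaussian white noise u(t).
rng = np.random.default_rng(0)

A = np.array([[0.8, 0.2],
              [-0.1, 0.7]])    # transition matrix (m x m), stable
B = np.array([[1.0],
              [0.5]])          # input matrix (m x p)
C = np.array([[1.0, 0.0]])     # observation matrix (p x m)
D = np.array([[0.0]])          # direct term (p x p)

lambda_u = 1.0                 # input covariance (scalar here, since p = 1)
N = 5000

x = np.zeros((2, 1))           # zero-mean initial state
y = np.empty(N)
for t in range(N):
    u = rng.normal(0.0, np.sqrt(lambda_u), size=(1, 1))
    y[t] = (C @ x + D @ u)[0, 0]   # observation equation
    x = A @ x + B @ u              # state equation

# With a zero-mean initial state the output process is zero-mean Gaussian,
# so its sample mean should be close to zero.
print(abs(y.mean()) < 0.2)
```

Because the initial state is zero-mean and all operations are linear, the simulated output is a zero-mean Gaussian process, consistent with the statistical properties stated below.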

The first equation is called the state equation and models the dynamic behaviour of the discrete-time system. The second equation is called the observation equation, since this equation controls which part of the output of the representation is observed. The input process u(t) is assumed to be a zero-mean stationary Gaussian white noise, i.e.

  E[u(t)] = 0,  E[u(t) u^T(t+n)] = Λ_u δ(n)                   (2.2)

where δ(n) is the Kronecker delta. The dimension of u(t) is assumed to be p × 1, which implies that the dimension of the covariance matrix Λ_u is p × p. In the following, the statistical properties in (2.2) will be abbreviated u(t) ~ NID(0, Λ_u). Since (2.1) only involves linear operations, the state vector process x(t) of the representation and the output process y(t) will also be zero-mean Gaussian distributed processes if the initial state vector is zero-mean, i.e.

  E[x(t)] = 0,  E[y(t)] = 0
  E[y(t) y^T(t+n)] = Γ(n),  E[x(t) x^T(t+n)] = Σ(n)           (2.3)

The statistical properties of these two processes are therefore fully described by the covariance functions Γ(n) and Σ(n). Following the assumption made in the beginning of this section, the dimension of y(t) is p × 1. The dimension of the state vector x(t) is assumed to be m × 1, with m ≥ p. Since the state vector has the dimension m, the state space representation is said to be m-dimensional. Based on the dimensions of the input, state, and output vectors, the dimensions of the system matrices are:

- Transition matrix A: dim = m × m
- Input matrix B: dim = m × p
- Observation matrix C: dim = p × m
- Direct term matrix D: dim = p × p

The state space representation (2.1) of a discrete-time system is called a stochastic state space system, due to the stochastic input that results in a stochastic response. If a particular parameterization of the system matrices has been specified, (2.1) is referred to as a stochastic state space realization. In most cases the direct term D in the observation equation will be zero. However, there might be cases where this term is needed. As an example: if the accelerations of a sampled Gaussian white noise excited second-order system are extracted, it will be necessary to add a direct term, see Hoen [38]. Even though such a situation will not occur in this thesis, the presence of a direct term will be allowed for completeness in the following analysis. A solution of (2.1) for y(t+l), l ≥ 0, can be obtained by recursive substitution of (2.1) into itself:

  y(t+l) = C A^l x(t) + Σ_{j=1}^{l} C A^{l-j} B u(t+j-1) + D u(t+l)
  u(t) ~ NID(0, Λ_u),  x(t) = x_0                             (2.4)

The solution is split into a homogeneous part, which is represented by the first right-hand term, and a particular part represented by the second right-hand term.

2.1.2 Modelling of a Nonwhite Excitation

It is also possible to use (2.1) as a representation of a linear and time-invariant discrete-time system that is excited by nonwhite Gaussian distributed excitation. The only requirement is that the nonwhite Gaussian distributed stochastic input can be assumed to be generated from a linear and time-invariant shaping filter, see e.g. Melsa et al. [77]. Let the system that is excited by a nonwhite Gaussian distributed input w(t) be represented by the following state space system

  x_1(t+1) = A_1 x_1(t) + B_1 w(t)
  y(t)     = C_1 x_1(t) + D_1 w(t)                            (2.5)

and assume that the nonwhite input w(t) can be obtained by filtering Gaussian white noise u(t) through a linear and time-invariant shaping filter of the form

  x_2(t+1) = A_2 x_2(t) + B_2 u(t)
  w(t)     = C_2 x_2(t) + D_2 u(t)                            (2.6)

The state space systems (2.5) and (2.6) are coupled, and they can be represented by a single state space system by defining the following augmented state vector

  x(t) = [ x_1(t) ]
         [ x_2(t) ]                                           (2.7)

Insert the observation equation of (2.6) into the state equation of (2.5) and stack the obtained relation on top of the state equation of (2.6). Further, insert the observation equation of (2.6) into the observation equation of (2.5). In other words

  x_1(t+1) = A_1 x_1(t) + B_1 C_2 x_2(t) + B_1 D_2 u(t)
  x_2(t+1) = A_2 x_2(t) + B_2 u(t)                            (2.8)
  y(t)     = C_1 x_1(t) + D_1 C_2 x_2(t) + D_1 D_2 u(t)

It is seen that the nonwhite input w(t) has been eliminated. Introducing the augmented state vector in (2.8), it is also seen that a state space system in the standard format is obtained, i.e. a system excited by Gaussian white noise. The system matrices of this augmented representation are defined as

  A = [ A_1  B_1 C_2 ],  B = [ B_1 D_2 ],  C = [ C_1  D_1 C_2 ],  D = D_1 D_2
      [ 0    A_2     ]      [ B_2     ]                       (2.9)

So in conclusion:

- If a linear and time-invariant discrete-time system is subjected to a stochastic input that can be assumed to be generated by filtering Gaussian white noise through a linear and time-invariant shaping filter, then it will always be possible to represent the system by a Gaussian white noise excited state space system.

Since it is a typical assumption that the excitation of a civil engineering structure can be modelled as filtered Gaussian white noise, see Hoen [38], Ibrahim [41], Kozin et al. [68], Melsa et al. [77] and Tajimi [106], the state space representation (2.1) is assumed to be fully capable of describing the linear and time-invariant dynamic behaviour of such a structure.

2.1.3 Properties of Stochastic State Space Systems

A linear and time-invariant discrete-time system can be realised in state space in a number of ways. Due to this non-uniqueness, care must be taken with regard to the parameterization of the system matrices and the dimension of the state space. If the dimension is too small there will most certainly be an information loss, compared to the system that is being modelled. On the other hand, if the dimension of the state space is too large, the state space realization will contain redundant information. Either case should of course be avoided. Therefore, a realization should have a minimal state space dimension, where all modes in the system can be observed in the output and excited by an appropriate input. Such a realization is called minimal. A realization is only minimal if it is observable and reachable, see Kailath [48].
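Returning to the shaping-filter construction of section 2.1.2, the augmentation (2.9) can be sketched numerically. All matrix values below are made-up scalars for illustration; the check at the end confirms that stepping the coupled equations (2.8) and the augmented system with the same white noise input gives the same output.

```python
import numpy as np

# Structural system (2.5), driven by nonwhite noise w(t) - illustrative values.
A1 = np.array([[0.9]]); B1 = np.array([[1.0]])
C1 = np.array([[1.0]]); D1 = np.array([[0.0]])

# Shaping filter (2.6), generating w(t) from white noise u(t).
A2 = np.array([[0.5]]); B2 = np.array([[1.0]])
C2 = np.array([[1.0]]); D2 = np.array([[0.2]])

# Augmented white-noise-driven system matrices, equation (2.9).
A = np.block([[A1, B1 @ C2],
              [np.zeros((1, 1)), A2]])
B = np.vstack([B1 @ D2, B2])
C = np.hstack([C1, D1 @ C2])
D = D1 @ D2

# Consistency check: coupled form (2.8) and augmented form agree.
rng = np.random.default_rng(1)
x1 = np.zeros((1, 1)); x2 = np.zeros((1, 1)); xa = np.zeros((2, 1))
for _ in range(10):
    u = rng.normal(size=(1, 1))
    w = C2 @ x2 + D2 @ u                  # shaping filter output
    y_coupled = C1 @ x1 + D1 @ w
    y_aug = C @ xa + D @ u
    assert np.allclose(y_coupled, y_aug)
    x1 = A1 @ x1 + B1 @ w                 # coupled state updates
    x2 = A2 @ x2 + B2 @ u
    xa = A @ xa + B @ u                   # augmented state update

print(A.shape, B.shape, C.shape, D.shape)
```

The augmented state is exactly [x_1; x_2], so the two simulations stay identical step by step.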

Definition 2.1 - Observability and Reachability of a State Space Realization

An m-dimensional state space system is observable and reachable if the observability matrix Q_o(m) and the reachability matrix Q_r(m), defined as

  Q_o(m) = [ C        ]     Q_r(m) = [ B  AB  ...  A^{m-1} B ]
           [ CA       ]
           [ ...      ]                                       (2.10)
           [ CA^{m-1} ]

both have full rank m, see Kailath [48].

If a realization is observable, it is in principle possible to observe all dynamic modes of it, and if it is reachable, it is possible to transfer any initial state x(t) to an arbitrary state x(t+m) in not more than m steps, provided that the reachability matrix Q_r(m) is non-singular, i.e. has full rank m. The observability and reachability matrices are very important in many parts of system theory and system identification. One of the important properties of minimal state space systems is that one realization can be uniquely transformed into another by a similarity transformation.

Definition 2.2 - The Similarity Transformation of Minimal Realizations

Consider the m-dimensional state space system (2.1), and define the following linear transformation of the state vector

  x̃(t) = T x(t)                                              (2.11)

where T is an m × m non-singular transformation matrix. The state space system (2.1) can then be expressed in terms of the new state vector x̃(t) as

  x̃(t+1) = Ã x̃(t) + B̃ u(t)
  y(t)    = C̃ x̃(t) + D̃ u(t)                                (2.12)

with the new system matrices defined as

  Ã = T A T^{-1},  B̃ = T B,  C̃ = C T^{-1},  D̃ = D          (2.13)

This linear transformation of one realization to another is called a similarity transformation.

Similarity transformations are frequently used in system theory. One of the well-known similarity transformations is the eigenvalue/-vector decomposition, i.e. the modal decomposition, see e.g. Kailath [48].
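A brief sketch of Definition 2.1 and Definition 2.2 for a made-up m = 2, p = 1 realization (the matrices are illustrative, not from the text): the rank tests for minimality, and the fact that a similarity transformation preserves both the ranks and the eigenvalues of A.

```python
import numpy as np

def q_obs(A, C, m):
    # Q_o(m) = [C; CA; ...; C A^(m-1)], equation (2.10)
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(m)])

def q_reach(A, B, m):
    # Q_r(m) = [B, AB, ..., A^(m-1) B], equation (2.10)
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(m)])

A = np.array([[0.0, 1.0], [-0.5, 1.2]])   # illustrative values
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
m = 2

Qo = q_obs(A, C, m)
Qr = q_reach(A, B, m)
# Full rank of both matrices means the realization is minimal.
print(np.linalg.matrix_rank(Qo) == m and np.linalg.matrix_rank(Qr) == m)

# Similarity transformation (2.13): observability rank and the eigenvalues
# of the transition matrix are invariant.
T = np.array([[2.0, 1.0], [0.0, 1.0]])
At = T @ A @ np.linalg.inv(T)
Ct = C @ np.linalg.inv(T)
assert np.linalg.matrix_rank(q_obs(At, Ct, m)) == m
assert np.allclose(np.sort_complex(np.linalg.eigvals(At)),
                   np.sort_complex(np.linalg.eigvals(A)))
```

The eigenvalue invariance is what makes modal parameters independent of the particular minimal realization chosen.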

2.2 Modelling of Discrete-Time Systems Affected by Noise

Besides the applied input to the state space system (2.1), there might be other inputs that contribute to the system response in a more uncontrollable way. This undesirable influence is characterized as disturbance or noise. In system identification using measured system response, disturbance might be caused by different phenomena. The most obvious phenomena are noise generated by the sensors and noise arising from round-off errors during the A/D conversion. Disturbance might also be caused by an inadequacy of using a linear and time-invariant model. Such an inadequacy could be caused by a small non-linearity in the true system or by non-stationary excitation. In any case, noise will always be present in measured data and should therefore always be taken into account. It is therefore necessary to extend the stochastic state space system (2.1) with a noise model.

2.2.1 Modelling of Disturbance

In the context of representing linear and time-invariant discrete-time systems in state space, the different kinds of noise are usually divided into two categories:

- Process noise
- Measurement noise

The process noise should be regarded as one equivalent noise term that models all the noise sources causing the linear and time-invariant state space modelling to be inadequate. This could be the above-mentioned small non-linearities in the true system. The measurement noise should in a similar manner be regarded as an equivalent noise term that includes all the noise sources that disturb the measurements. This could be the above-mentioned A/D round-off errors and the sensor noise. It will be assumed that the process noise can be described by a zero-mean Gaussian white noise process w(t), and that the measurement noise in a similar way can be described by a zero-mean Gaussian white noise process v(t). The joint distribution of these two processes is given by

  E[ w(t) ] = 0,   E[ [ w(t) ] [ w^T(t+n)  v^T(t+n) ] ] = [ Q    S ] δ(n)
  E[ v(t) ] = 0      [ v(t) ]                            [ S^T  R ]        (2.14)

The process noise that disturbs the system should be added to the state equation in (2.1), whereas the measurement noise should be added to the observation equation. In other words

  x(t+1) = A x(t) + B u(t) + w(t)
  y(t)   = C x(t) + D u(t) + v(t)                             (2.15)

This implies that the dimensions of w(t) and v(t) are m × 1 and p × 1, respectively. It also implies that the covariance matrices describing the disturbance have the following dimensions:

- Q: dim = m × m
- R: dim = p × p
- S: dim = m × p

The problem with noise-contaminated systems is that it is only possible to predict the response. For state space systems, this prediction is accomplished by the construction of the associated Kalman filter.

2.2.2 The Steady-State Kalman Filter

Assume that measurements y(t) are available and that they are zero-mean and Gaussian distributed. Also assume that the matrices {A, B, C, D, Λ_u, Q, R, S} are known. In system identification of civil engineering structures all these matrices are unknown, and some of them are impossible to determine. However, in order to carry out the statistical analysis, it is formally assumed that they are known. Now, since it is only possible to predict the system response, optimal predictors of the state of the system and of the system response are needed.

Definition 2.3 - Optimal Predictor of the State Space System

The optimal (in least-squares sense) one-step-ahead predictor x̂(t|t-1) of the state vector is defined as the conditional mean of x(t), given all previous measurements collected in the sequence Y_{t-1} = {y(t-1), y(t-2), ..., y(t_0)}. In other words

  x̂(t|t-1) = E[ x(t) | {y(t-1), y(t-2), ..., y(t_0)} ] = E[ x(t) | Y_{t-1} ]   (2.16)

The optimal one-step-ahead predictor ŷ(t|t-1) of the measured response is defined in a similar manner as the conditional mean of y(t), given Y_{t-1}

  ŷ(t|t-1) = E[ y(t) | Y_{t-1} ]
           = E[ C x(t) + D u(t) + v(t) | Y_{t-1} ]
           = C x̂(t|t-1)                                      (2.17)

As seen, ŷ(t|t-1) is related to x̂(t|t-1) through the observation matrix C.

Given the system matrices of the state space system in (2.15), the measurements, and the stochastic input, the basic idea of Kalman filtering is to predict the state x(t) by x̂(t|t-1) in the sense that the state prediction error is as small as possible. This state prediction error is defined in the following.

Definition 2.4 - State Prediction Errors

Let J(t) represent the part of x(t) that cannot be predicted from Y_{t-1} by x̂(t|t-1)

  J(t) = x̂(t|t-1) - x(t)                                     (2.18)

This part is termed the state prediction error.

Since only observable states can be predicted, and since it must be possible to predict the complete state vector, the system has to be observable, see definition 2.1. Also, because it is only possible to predict the response of the system itself, there will always be a part of a measurement that cannot be predicted by ŷ(t|t-1). This part is termed the innovation.

Definition 2.5 - Innovations

Let e(t) represent the part of y(t) that cannot be predicted from Y_{t-1}

  e(t) = y(t) - ŷ(t|t-1) = y(t) - C x̂(t|t-1)                 (2.19)

Thus e(t) represents the new information in y(t) not contained in Y_{t-1}. For this reason e(t) is called the innovation. Since y(t) is assumed zero-mean and Gaussian distributed, e(t) is a zero-mean Gaussian white noise process that is fully described by its covariance matrix Λ_e. In the following these statistical properties will be abbreviated e(t) ~ NID(0, Λ_e).

In what follows, it is assumed that all transient behaviour has faded away and that a steady state exists. From Y_{t-1} the sequence Y_t can then be expressed as

  Y_t = { (Y_{t-1})^T, y^T(t) }^T                             (2.20)

Combining this relation with definition 2.5 and lemma A.5 in appendix A, which concerns the properties of orthogonal projections of Gaussian stochastic variables, the following relations for the predictor x̂(t+1|t) are obtained

  x̂(t+1|t) = E[ x(t+1) | Y_t ]
            = E[ x(t+1) | Y_{t-1}, e(t) ]
            = E[ x(t+1) | Y_{t-1} ] + E[ x(t+1) | e(t) ]      (2.21)
            = x̂(t+1|t-1) + E[ x(t+1) e^T(t) ] E[ e(t) e^T(t) ]^{-1} e(t)
            = x̂(t+1|t-1) + K e(t)

where the matrix K is called the steady-state Kalman gain. Based on the last relation in (2.21), it is possible to formulate the steady-state Kalman filter that is associated with the noise-affected state space system in (2.15).

Theorem 2.1 - The Steady-State Kalman Filter of a Stochastically Excited System

The steady-state optimal state predictor, described in terms of the Kalman filter of the state space system with stochastic input, is given by

  x̂(t+1|t) = A x̂(t|t-1) + K e(t)
  e(t)      = y(t) - C x̂(t|t-1)                               (2.22)

The first equation is called the update equation and the last equation the correction equation. K is the Kalman gain matrix, which includes the description of the disturbance of the system as well as the Gaussian white noise excitation. This matrix is defined as

  K = ( A P C^T + B Λ_u D^T + S ) Λ_e^{-1}                    (2.23)

with Λ_e being the covariance matrix of the innovations e(t), given by

  Λ_e = C P C^T + D Λ_u D^T + R                               (2.24)

The covariance matrix P is obtained as the positive semi-definite solution to the algebraic Riccati equation

  P = A P A^T + B Λ_u B^T + Q - K Λ_e K^T                     (2.25)

Proof: The following proof is a modified version of the traditional Kalman filter proofs found in e.g. Goodwin et al. [31] and Hannan et al. [33]. The modification lies in the fact that the excitation is usually assumed known and therefore deterministic. Assume that the Gaussian white noise

excitation u(t) is uncorrelated with the disturbance, i.e. with w(t) and v(t). The steady-state prediction error covariance P is then defined as

  P = E[ { x̂(t|t-1) - x(t) }{ x̂(t|t-1) - x(t) }^T ] = Σ - Σ̂
  Σ = E[ x(t) x^T(t) ],  Σ̂ = E[ x̂(t|t-1) x̂^T(t|t-1) ]      (2.26)

By taking the conditional mean of the state equation of (2.15), given the measurements Y_{t-1}, the following relation is obtained

  x̂(t+1|t-1) = E[ x(t+1) | Y_{t-1} ]
             = E[ A x(t) + B u(t) + w(t) | Y_{t-1} ]          (2.27)
             = A x̂(t|t-1)

Substituting (2.27) into (2.21) gives the update equation in (2.22). From (2.22) the following expectation is obtained

  E[ ( A x̂(t|t-1) + K e(t) )( A x̂(t|t-1) + K e(t) )^T ] = A Σ̂ A^T + K Λ_e K^T   (2.28)

and from (2.15) Σ is given by

  Σ = E[ ( A x(t) + B u(t) + w(t) )( A x(t) + B u(t) + w(t) )^T ]
    = A Σ A^T + B Λ_u B^T + Q                                 (2.29)

Subtracting (2.28) from (2.29) and using (2.26) gives (2.25). Finally, the covariance E[ x(t+1) e^T(t) ] in (2.21) is given by

  E[ x(t+1) e^T(t) ] = E[ ( A x(t) + B u(t) + w(t) )( C[ x(t) - x̂(t|t-1) ] + D u(t) + v(t) )^T ]
                     = A E[ x(t)( x(t) - x̂(t|t-1) )^T ] C^T + B E[ u(t) u^T(t) ] D^T + E[ w(t) v^T(t) ]   (2.30)
                     = A P C^T + B Λ_u D^T + S

since x(t) - x̂(t|t-1) is orthogonal to x̂(t|t-1). The innovation covariance matrix Λ_e is given by

  Λ_e = E[ e(t) e^T(t) ]
      = E[ ( C[ x(t) - x̂(t|t-1) ] + D u(t) + v(t) )( C[ x(t) - x̂(t|t-1) ] + D u(t) + v(t) )^T ]   (2.31)
      = C P C^T + D Λ_u D^T + R

By combining (2.30) with Λ_e, the Kalman gain in (2.23) is obtained. Inserting (2.19) in definition 2.5 results in the correction equation of the Kalman filter recursion in (2.22).

Thus, knowing the measured output y(t), the system matrices {A, B, C, D} and the covariance matrices {Λ_u, Q, R, S}, it is possible to calculate the covariance matrix P, the Kalman gain K, and the innovation covariance matrix Λ_e before the filtering starts.

2.2.3 The Innovation State Space System

In system identification, it is the system matrices that are unknown. This means that the above Kalman filter is unknown. However, the filter reveals how the state space system that accounts for the presence of noise should look. This special state space system is known as the innovation state space system, see e.g. Ljung [71]

  x̂(t+1|t) = A x̂(t|t-1) + K e(t)
  y(t)      = C x̂(t|t-1) + e(t),  e(t) ~ NID(0, Λ_e)         (2.32)

This state space system is obtained from the steady-state Kalman filter by rearranging the correction equation of (2.22). The system is driven by the innovation process, which is obtained from contributions of the stochastic input u(t) and the disturbance represented by w(t) and v(t). The dynamic behaviour of the system is still described by the system matrices A and C, as in the noise-free case. This implies that e.g. the observability matrix Q_o(m) of an m-dimensional innovation state space system is also given by (2.10). On the other hand, to obtain the reachability matrix Q_r(m) of the innovation system, the input matrix B in (2.10) must be substituted by the Kalman gain K. Similarity transformations can still be performed according to definition 2.2 by setting B = K and D = I. A solution of (2.32) for y(t+l), l ≥ 0, can be obtained in a similar fashion as (2.4), by recursive substitution of (2.32) into itself, to yield

  y(t+l) = C A^l x̂(t|t-1) + Σ_{j=1}^{l} C A^{l-j} K e(t+j-1) + e(t+l)
  e(t) ~ NID(0, Λ_e),  x̂(t|t-1) = x̂_0,  E[ x̂_0 ] = 0        (2.33)
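A minimal numerical sketch of Theorem 2.1 (illustrative matrices, not from the text): the algebraic Riccati equation (2.25) can be solved by fixed-point iteration together with (2.23) and (2.24), yielding the steady-state Kalman gain K and the innovation covariance. The symbols Λ_u and Λ_e are the reconstructed names used in this chapter for the input and innovation covariances.

```python
import numpy as np

# Illustrative system (2.15) with m = 2, p = 1.
A = np.array([[0.7, 0.3], [0.0, 0.6]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

Lam_u = np.array([[1.0]])     # input covariance (assumed symbol)
Q = 0.05 * np.eye(2)          # process noise covariance
R = np.array([[0.1]])         # measurement noise covariance
S = np.zeros((2, 1))          # cross covariance E[w(t) v^T(t)]

# Fixed-point iteration of the Riccati equation (2.25) using (2.23), (2.24).
P = np.eye(2)
for _ in range(500):
    Lam_e = C @ P @ C.T + D @ Lam_u @ D.T + R                        # (2.24)
    K = (A @ P @ C.T + B @ Lam_u @ D.T + S) @ np.linalg.inv(Lam_e)   # (2.23)
    P = A @ P @ A.T + B @ Lam_u @ B.T + Q - K @ Lam_e @ K.T          # (2.25)

# Recompute K, Lam_e from the converged P and verify the fixed point.
Lam_e = C @ P @ C.T + D @ Lam_u @ D.T + R
K = (A @ P @ C.T + B @ Lam_u @ D.T + S) @ np.linalg.inv(Lam_e)
residual = A @ P @ A.T + B @ Lam_u @ B.T + Q - K @ Lam_e @ K.T - P
print(np.abs(residual).max() < 1e-10)
```

With K and Λ_e in hand, the predictor recursion (2.22), x̂(t+1|t) = A x̂(t|t-1) + K e(t), can be run directly on measured data.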

Again the solution is split into a homogeneous part, which is represented by the first right-hand term, and a particular part represented by the second right-hand term. So in conclusion:

- The innovation state space representation (2.32) models the dynamic behaviour of a linear and time-invariant discrete-time system subjected to Gaussian white noise excitation and Gaussian white noise disturbance. Therefore, it is a more general description than the representation given in (2.1).

2.3 ARMAV Modelling of Discrete-Time Systems

This section shows how the stochastic state space system is related to a stochastic difference equation system. If the applied excitation is Gaussian white noise, this stochastic difference equation system is equivalent to the ARMAV model defined in (1.2). This section will only focus on representing a discrete-time linear and time-invariant system that is not affected by disturbance. The inclusion of the disturbance will be shown in the next section. Just as a continuous-time linear and time-invariant system can be represented by a differential equation system, it is possible to represent a discrete-time linear and time-invariant system by a difference equation system. For a p-variate system the equation is

  y(t) + A_1 y(t-1) + ... + A_n y(t-n) = f(u(t)),  u(t) ~ NID(0, Λ_u)   (2.34)

The system response y(t) is now described by a homogeneous part, which is an nth order auto-regressive matrix polynomial, and a nonhomogeneous part being a function of the Gaussian white noise process u(t). It is assumed that y(t) and u(t) are vectors of dimension p × 1, which implies that all the n auto-regressive coefficient matrices A_i will have the dimensions p × p. It is claimed that a general solution of (2.34), for an arbitrary t ≥ t_0, can be represented on the following form

  y(t) = y_h(t) + y_p(t)                                      (2.35)

with y_p(t) being a fixed particular solution, i.e. one among many solutions, and y_h(t) a general solution of the homogeneous equation

  y(t) + A_1 y(t-1) + ... + A_n y(t-n) = 0                    (2.36)

Since y(t) in the homogeneous case can be sequentially computed by

  y(t) = -A_1 y(t-1) - ... - A_n y(t-n)                       (2.37)

the set of solutions of the homogeneous equation is a linear space of dimension np. The general solution is therefore given by (2.4) for an m-dimensional state vector, where m = np, see Gohberg et al. [30]. The first part of (2.4) is equal to y_h(t) and the last part to y_p(t) in (2.35). Since the general solution of a minimal system is not restricted to any specific realization, the following relations are in principle valid for any arbitrary m-dimensional minimal realization. This is due to definition 2.2, stating that all minimal realizations are similar. From (2.4) the general solution of the homogeneous part is given by

  y_h(t+l) = C A^l x(t),  l = 0, 1, ...,  x(t) = x_0          (2.38)

Inserting (2.38) into (2.36) yields

  ( C A^n + A_1 C A^{n-1} + ... + A_{n-1} C A + A_n C ) x(t) = 0   (2.39)

which can only be fulfilled for an arbitrary choice of x(t) if the contents inside the parentheses equal zero, i.e.

  C A^n + A_1 C A^{n-1} + ... + A_{n-1} C A + A_n C = 0       (2.40)

This relation reveals an important link between the auto-regressive coefficient matrices and the system matrices A and C. The importance becomes clear in the following theorem.

Theorem 2.2 - The ARMAV(n,n) Model - Without Noise Modelling

A minimal realization of the state space system (2.1), described by {A, B, C, D, Λ_u}, can be represented by an ARMAV(n,n) model, defined as

  y(t) + A_1 y(t-1) + A_2 y(t-2) + ... + A_n y(t-n)
    = B_0 u(t) + B_1 u(t-1) + ... + B_n u(t-n),  u(t) ~ NID(0, Λ_u)   (2.41)

if the state space realization fulfils the dimensional requirement that the state dimension m, divided by the number of channels p, equals an integral value n. The n auto-regressive matrix coefficients, which all have the dimensions p × p, are then given by

  [ A_n  A_{n-1}  ...  A_2  A_1 ] = -C A^n Q_o^{-1}(n)        (2.42)

The n+1 moving average matrix coefficients, also of dimensions p × p, are given by

  [ B_n  B_{n-1}  ...  B_1  B_0 ] = [ A_n  A_{n-1}  ...  A_1  I ] T(n+1)   (2.43)

with T(n+1) defined as

  T(n+1) = [ D             0             ...  0   0 ]
           [ CB            D             ...  0   0 ]
           [ ...                                    ]
           [ CA^{n-2}B     CA^{n-3}B     ...  D   0 ]
           [ CA^{n-1}B     CA^{n-2}B     ...  CB  D ]          (2.44)

where it should be noticed that B_0 = D. The Gaussian white noise input process u(t) is unaffected by this transformation.

Proof: The auto-regressive coefficient matrices are obtained from (2.40) by the following rearrangement

  [ A_n  A_{n-1}  ...  A_1 ] Q_o(n) = -C A^n                  (2.45)

where the observability matrix defined in (2.10) has been introduced. It is seen that the auto-regressive coefficient matrices can be determined from the system matrices A and C as

  [ A_n  A_{n-1}  ...  A_1 ] = -C A^n Q_o^{-1}(n)             (2.46)

This equation is equal to (2.42). The auto-regressive coefficient matrices can only be determined for state space realizations where Q_o(n) is non-singular. If Q_o(n) is singular, methods for determination of the coefficient matrices may be found in Gawronski et al. [24] and Kailath [48]. The general solution (2.4) can be used to find an expression for f(u(t)) in the following way. From the observation equation of (2.1) and (2.4), construct the following set of equations

  [ y(t-n)   ]   [ C        ]           [ D            0            ...  0   0 ] [ u(t-n)   ]
  [ y(t-n+1) ]   [ CA       ]           [ CB           D            ...  0   0 ] [ u(t-n+1) ]
  [ ...      ] = [ ...      ] x(t-n)  + [ ...                                  ] [ ...      ]
  [ y(t-1)   ]   [ CA^{n-1} ]           [ CA^{n-2}B    CA^{n-3}B    ...  D   0 ] [ u(t-1)   ]
  [ y(t)     ]   [ CA^n     ]           [ CA^{n-1}B    CA^{n-2}B    ...  CB  D ] [ u(t)     ]   (2.47)

which can compactly be written as

  Y(t-n,t) = Q_o(n+1) x(t-n) + T(n+1) U(t-n,t)                (2.48)

Now introduce the matrix containing the auto-regressive coefficient matrices obtained from (2.42)

  P = [ A_n  A_{n-1}  ...  A_1  I ]                           (2.49)

and notice that (2.40) can be written as P Q_o(n+1) = 0. Multiplying (2.48) from the left by (2.49) then yields

  P Y(t-n,t) = P Q_o(n+1) x(t-n) + P T(n+1) U(t-n,t)
             = P T(n+1) U(t-n,t)                              (2.50)

The left-hand side of (2.50) is exactly the auto-regressive part of the difference equation system in (2.34), and the right-hand term is therefore equivalent to f(u(t)), which is an nth order matrix polynomial in u(t). Since u(t) is Gaussian white noise, this matrix polynomial is a true moving average, which means that (2.50) really is an ARMAV(n,n) model.

It is seen that it is possible to establish an equivalent ARMAV model of the minimal stochastic state space system. This ARMAV model is independent of the actual realization, i.e. no matter how a p-variate linear and time-invariant discrete-time system subjected to Gaussian white noise excitation is realized, the equivalent ARMAV model will always be the same.
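A minimal sketch of the auto-regressive part of Theorem 2.2, using made-up values for an m = 2, p = 1 realization (so n = m/p = 2): the coefficients are recovered from (2.42) and then checked against the defining relation (2.40).

```python
import numpy as np

# Illustrative minimal realization with m = 2, p = 1, hence n = 2.
A = np.array([[0.0, 1.0], [-0.5, 1.2]])
C = np.array([[1.0, 0.0]])
n, p = 2, 1

# Observability matrix Q_o(n) = [C; CA], equation (2.10).
Qo = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# Equation (2.42): [A_n ... A_1] = -C A^n Qo(n)^{-1}.
AR = -C @ np.linalg.matrix_power(A, n) @ np.linalg.inv(Qo)
A2, A1 = AR[:, :p], AR[:, p:]          # blocks A_2 and A_1

# Check that (2.40) holds: C A^2 + A_1 C A + A_2 C = 0.
lhs = C @ np.linalg.matrix_power(A, n) + A1 @ C @ A + A2 @ C
print(np.allclose(lhs, 0))
```

The same coefficients would be obtained from any other minimal realization of the system, which is the realization-independence stated at the end of the proof.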

So in conclusion:

- Any stochastic state space system, where the state space dimension m divided by the number of observed outputs p equals an integral value n, can be represented by an ARMAV model if the observability matrix Q_o(n) is non-singular.

The ARMAV model can also be defined in terms of its transfer function H(q), introduced in (1.2), as

  y(t) = A^{-1}(q) B(q) u(t) = H(q) u(t)
  A(q) = I + A_1 q^{-1} + A_2 q^{-2} + ... + A_n q^{-n}
  B(q) = B_0 + B_1 q^{-1} + B_2 q^{-2} + ... + B_n q^{-n}     (2.51)

where A(q) and B(q) are the auto-regressive and the moving average matrix polynomials. As was the case for the stochastic state space system (2.1), this ARMAV model is not very useful in practical system identification. An ARMAV model based on the innovation state space system will be more useful in this context, since such a model will take the presence of disturbance into account.

2.4 ARMAV Modelling of Discrete-Time Systems Affected by Noise

Establishing an ARMAV model on the basis of the innovation state space system is quite simple, since all the steps involved in such a conversion are the same as shown in the previous section. The proof of the following theorem resembles the proof of theorem 2.2 and will therefore not be given.

Theorem 2.3 - The ARMAV(n,n) Model - With Noise Modelling

A minimal realization of the innovation state space system (2.32), described by {A, K, C, Λ_e}, can be equivalently represented by an ARMAV(n,n) model, defined as

  y(t) + A_1 y(t-1) + A_2 y(t-2) + ... + A_n y(t-n)
    = e(t) + C_1 e(t-1) + ... + C_n e(t-n),  e(t) ~ NID(0, Λ_e)   (2.52)

if the state space realization fulfils the dimensional requirement that the state dimension m, divided by the number of channels p, equals an integral value n. The n auto-regressive matrix coefficients, all of dimension p × p, are given by

  [ A_n  A_{n-1}  ...  A_2  A_1 ] = -C A^n Q_o^{-1}(n)        (2.53)

The n moving average matrix coefficients, also of dimensions p × p, are given by

  [ C_n  C_{n-1}  ...  C_1  I ] = [ A_n  A_{n-1}  ...  A_1  I ] T(n+1)   (2.54)

with T(n+1) defined as

  T(n+1) = [ I             0             ...  0   0 ]
           [ CK            I             ...  0   0 ]
           [ ...                                    ]
           [ CA^{n-2}K     CA^{n-3}K     ...  I   0 ]
           [ CA^{n-1}K     CA^{n-2}K     ...  CK  I ]          (2.55)

The innovation process e(t) is unaffected by this transformation, and is still described by the covariance matrix Λ_e.

This ARMAV model can also be defined in terms of its transfer function H(q) as

  y(t) = A^{-1}(q) C(q) e(t) = H(q) e(t)
  A(q) = I + A_1 q^{-1} + A_2 q^{-2} + ... + A_n q^{-n}
  C(q) = I + C_1 q^{-1} + C_2 q^{-2} + ... + C_n q^{-n}       (2.56)

where A(q) and C(q) are the auto-regressive and the moving average matrix polynomials. In chapter 5, system identification of multivariate linear and time-invariant discrete-time systems will be considered, and the models that will be used are the innovation state space system and the ARMAV model derived in this section. However, it is still important to be aware of the difference between the ARMAV model that accounts for the presence of noise and the model that does not. This is shown in chapter 4.

2.5 A State Space Realization of an ARMAV Model

Until now, the actual parameterization of the state space system has not been considered. This has not been necessary, since the noise modelling and the conversion to ARMAV models are both procedures that are independent of the choice of realization. However, converting an ARMAV model to state space makes it necessary

to choose a realization. There is no unique way of doing this conversion, and the choice of a state space realization may not be easy. If the realization is used as part of some algebraic manipulations, it is a good idea to choose a realization that is easy to construct from the auto-regressive and moving average coefficient matrices. On the other hand, if the realization is used in system identification software, the realization must be well-conditioned in order to be numerically efficient. In this section a state space realization that is easy to obtain from the coefficient matrices of the ARMAV model will be presented. This realization is used on several occasions in chapters 3-6 and is known as the observability canonical state space realization. It is constructed on the basis of the auto-regressive coefficient matrices and the impulse response function of the ARMAV model.

2.5.1 The Impulse Response Function of ARMAV Models

The ARMAV(n,n) model can equivalently be defined as a convolution of its impulse response function h(j) and the Gaussian white noise input

  y(t) = Σ_{j=0}^{∞} h(j) u(t-j),  u(t) ~ NID(0, Λ_u)         (2.57)

In this section, relations involving the impulse response function of this model will be given. Based on these relations, it is possible to construct the observability canonical state space realization. The p × p transfer function defined in (2.51) can also be expressed in terms of the impulse response function and the delay operator q as

  H(q) = Σ_{j=0}^{∞} h(j) q^{-j}                              (2.58)

If y(t) in (2.51) is substituted using (2.57) and (2.58), the following relation is obtained

  ( I + A_1 q^{-1} + ... + A_n q^{-n} ) Σ_{j=0}^{∞} h(j) q^{-j}
    = B_0 + B_1 q^{-1} + ... + B_n q^{-n}                     (2.59)

By comparing the coefficients of equal powers of q, the following relationships between the coefficient matrices of the ARMAV model and its impulse response function h(k), for k = 0 to n, can be obtained

  h(0) = B_0
  h(1) = -A_1 h(0) + B_1
  h(2) = -A_1 h(1) - A_2 h(0) + B_2                           (2.60)
  ...
  h(n) = -A_1 h(n-1) - ... - A_n h(0) + B_n

Based on (2.60) the following two recursive relations can be derived

  h(k) = -A_1 h(k-1) - ... - A_k h(0) + B_k,  0 ≤ k ≤ n       (2.61)
  h(k) = -A_1 h(k-1) - ... - A_n h(k-n),      k > n           (2.62)

The first relation should be used if k ≤ n, and the second if k > n. If the moving average polynomial is of lower order than the order n of the auto-regressive polynomial, the remaining moving average coefficient matrices B_i are zero in the above expressions. The last n equations of (2.60) can be rearranged as

  [ I                             ] [ h(1) ]   [ B_1 - A_1 B_0 ]
  [ A_1      I                    ] [ h(2) ]   [ B_2 - A_2 B_0 ]
  [ ...                           ] [ ...  ] = [ ...           ]
  [ A_{n-1}  A_{n-2}  ...  A_1  I ] [ h(n) ]   [ B_n - A_n B_0 ]   (2.63)

which can be used for initialization of (2.61) and (2.62).

2.5.2 The Observability Canonical State Space Realization

The above relations can be used for prediction of the most probable value of the response y(t+m), given the inputs u(t-s) and responses y(t-s), for time steps s ≥ 0. From (2.57) it can be verified that

  y(t+m) = Σ_{j=0}^{∞} h(j) u(t+m-j)
         = Σ_{j=0}^{m-1} h(j) u(t+m-j) + Σ_{j=m}^{∞} h(j) u(t+m-j)   (2.64)
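The recursions (2.61) and (2.62) can be sketched for a univariate (p = 1) ARMAV(2,2) model with made-up coefficients; the memoized function below returns h(k) for any k:

```python
import numpy as np

# Illustrative ARMAV(2,2) coefficients (p = 1): y(t) + A_1 y(t-1) + A_2 y(t-2)
# = B_0 u(t) + B_1 u(t-1) + B_2 u(t-2).
A_coef = [np.array([[-1.2]]), np.array([[0.5]])]                     # A_1, A_2
B_coef = [np.array([[1.0]]), np.array([[0.3]]), np.array([[0.1]])]   # B_0, B_1, B_2
n = 2
memo = {0: B_coef[0]}          # h(0) = B_0, from (2.60)

def h(k):
    # (2.61): h(k) = -A_1 h(k-1) - ... - A_k h(0) + B_k  for k <= n
    # (2.62): h(k) = -A_1 h(k-1) - ... - A_n h(k-n)      for k > n
    if k not in memo:
        val = -sum(A_coef[i] @ h(k - 1 - i) for i in range(min(k, n)))
        if k <= n:
            val = val + B_coef[k]
        memo[k] = val
    return memo[k]

print(h(0)[0, 0], h(1)[0, 0], h(2)[0, 0])
```

For these coefficients, h(1) = -A_1 h(0) + B_1 = 1.2·1.0 + 0.3 = 1.5, which the recursion reproduces.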

The first sum has zero mean and only involves the inputs e(t+1) to e(t+m), and it is as such independent of what has happened up to the time t. The second sum is known at the time t, and the conditional mean ŷ(t+m|t) of y(t+m), given the responses y(t−s), for time steps s ≥ 0, is as such equal to

$$ \hat{y}(t+m\,|\,t) = \sum_{j=m}^{\infty} h(j)\,e(t+m-j) \tag{2.65} $$

This equation can be reformulated as

$$ \hat{y}(t+m\,|\,t) = \sum_{j=m+1}^{\infty} h(j)\,e(t+m-j) + h(m)\,e(t) = \hat{y}(t+m\,|\,t-1) + h(m)\,e(t) \tag{2.66} $$

and for m = 0 this equation reduces to

$$ y(t) = \hat{y}(t\,|\,t-1) + B_0\,e(t) \tag{2.67} $$

since the most probable value ŷ(t|t) of y(t), given the response up to and including time t, is y(t) itself. If (2.65) is combined with (2.61), and if m ≥ n, the following result can be obtained

$$ \begin{aligned} \hat{y}(t+m\,|\,t) &= \sum_{j=m}^{\infty} h(j)\,e(t+m-j) = \sum_{j=m}^{\infty}\left(-\sum_{i=1}^{n} A_i\,h(j-i) + B_j\right) e(t+m-j) \\ &= -\sum_{i=1}^{n} A_i \sum_{j=m}^{\infty} h(j-i)\,e(t+m-j) + B_m\,e(t) \\ &= -\sum_{i=1}^{n} A_i\,\hat{y}(t+m-i\,|\,t) + B_m\,e(t) \end{aligned} \tag{2.68} $$

Inserting this result into (2.66) yields

$$ \begin{aligned} \hat{y}(t+m\,|\,t) &= -\sum_{i=1}^{n} A_i\,\hat{y}(t+m-i\,|\,t-1) - \sum_{i=1}^{n} A_i\,h(m-i)\,e(t) + B_m\,e(t) \\ &= -\sum_{i=1}^{n} A_i\,\hat{y}(t+m-i\,|\,t-1) + h(m)\,e(t) \end{aligned} \tag{2.69} $$
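The prediction relations (2.66) and (2.69) can also be checked numerically by truncating the infinite sums in (2.65), which is justified for a stable model since h(j) decays. A sketch, again with hypothetical example coefficients (not from the text), with t = 0 as the prediction origin and the convention that ŷ(t+k|t) equals the known response for k ≤ 0:

```python
import numpy as np

# Hypothetical stable ARMAV(2,2) example (illustrative values only).
p, n, J = 2, 2, 60
A = [np.array([[0.5, 0.1], [0.0, 0.4]]), np.array([[0.2, 0.0], [0.1, 0.1]])]
B = [np.eye(p), np.array([[0.3, 0.0], [0.2, 0.3]]), np.array([[0.1, 0.1], [0.0, 0.2]])]

# Impulse responses h(0 .. J+n) from (2.61)/(2.62).
h = [B[0]]
for k in range(1, J + n + 1):
    hk = B[k].copy() if k <= n else np.zeros((p, p))
    for i in range(1, min(k, n) + 1):
        hk -= A[i - 1] @ h[k - i]
    h.append(hk)

rng = np.random.default_rng(0)
e = {t: rng.standard_normal(p) for t in range(-J, 1)}   # noise up to time t = 0

def yhat(m, cond):
    """Truncated (2.65): prediction of y(m) given the noise up to time `cond`."""
    return sum(h[m - s] @ e[s] for s in range(-J + n, cond + 1))

# (2.66): the arrival of e(t) updates the m-step prediction by h(m) e(t).
m = 2
assert np.allclose(yhat(m, 0), yhat(m, -1) + h[m] @ e[0])

# (2.69) with m = n: the new prediction follows from the old ones and e(t).
lhs = yhat(n, 0)
rhs = -A[0] @ yhat(n - 1, -1) - A[1] @ yhat(0, -1) + h[n] @ e[0]
assert np.allclose(lhs, rhs)
```

The second assertion is exactly the relation used for the last row of the state update in the realization of section 2.5.2.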

Based on these relations, it is possible to formulate the desired state space realization.

Theorem 2.4 - A State Space Realization of the ARMAV Model - Without Noise Modelling

A state space realization of the p-variate ARMAV(n,n) model (2.41) is given by

$$ \begin{aligned} x(t+1) &= A\,x(t) + B\,e(t) \\ y(t) &= C\,x(t) + D\,e(t), \qquad e(t) \sim \mathrm{NID}(0,\Lambda) \end{aligned} \tag{2.70} $$

with {A, B, C, D} defined as

$$ A = \begin{bmatrix} 0 & I & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & I \\ -A_n & -A_{n-1} & \cdots & -A_1 \end{bmatrix} \tag{2.71} $$

$$ B = \begin{bmatrix} I & & & \\ A_1 & I & & \\ \vdots & & \ddots & \\ A_{n-1} & A_{n-2} & \cdots & I \end{bmatrix}^{-1} \begin{bmatrix} B_1 - A_1 B_0 \\ B_2 - A_2 B_0 \\ \vdots \\ B_n - A_n B_0 \end{bmatrix} \tag{2.72} $$

$$ C = \begin{bmatrix} I & 0 & \cdots & 0 \end{bmatrix} \tag{2.73} $$

$$ D = B_0 \tag{2.74} $$

and the state vector x(t) defined as

$$ x(t) = \begin{bmatrix} \hat{y}(t\,|\,t-1) \\ \hat{y}(t+1\,|\,t-1) \\ \vdots \\ \hat{y}(t+n-2\,|\,t-1) \\ \hat{y}(t+n-1\,|\,t-1) \end{bmatrix} \tag{2.75} $$

This state space realization is called an observability canonical realization: observability because the associated observability matrix Q_o(n) is a unit matrix. The np × np transition matrix A is in its present form also known as the companion matrix of the auto-regressive matrix polynomial. The np × p input matrix B consists of the n impulse response matrices h(1), ..., h(n). The p × np observation matrix C ensures that only the system response is returned.

Proof: Combine (2.69) and (2.66), with increasing prediction horizon, to yield the following set of equations

$$ \begin{aligned} \hat{y}(t+1\,|\,t) &= \hat{y}(t+1\,|\,t-1) + h(1)\,e(t) \\ \hat{y}(t+2\,|\,t) &= \hat{y}(t+2\,|\,t-1) + h(2)\,e(t) \\ &\;\;\vdots \\ \hat{y}(t+n-1\,|\,t) &= \hat{y}(t+n-1\,|\,t-1) + h(n-1)\,e(t) \\ \hat{y}(t+n\,|\,t) &= -A_n\,\hat{y}(t\,|\,t-1) - \cdots - A_1\,\hat{y}(t+n-1\,|\,t-1) + h(n)\,e(t) \end{aligned} \tag{2.76} $$

This set of equations can compactly be written as

$$ x(t+1) = A\,x(t) + B\,e(t) \tag{2.77} $$

with x(t) defined by (2.75) and A by (2.71). The input matrix B is in its present form defined as

$$ B = \begin{bmatrix} h(1) \\ h(2) \\ \vdots \\ h(n-1) \\ h(n) \end{bmatrix} \tag{2.78} $$
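Theorem 2.4 can be verified numerically: the Markov parameters of the realization must reproduce the impulse response of the ARMAV model, i.e. h(0) = D and h(k) = C A^(k-1) B for k ≥ 1. The following sketch builds {A, B, C, D} for a hypothetical bivariate ARMAV(2,2) model (illustrative coefficients only) and performs this check:

```python
import numpy as np

# Hypothetical ARMAV(2,2) example coefficients (p = 2, n = 2).
p, n = 2, 2
Ac = [np.array([[0.5, 0.1], [0.0, 0.4]]), np.array([[0.2, 0.0], [0.1, 0.1]])]
Bc = [np.eye(p), np.array([[0.3, 0.0], [0.2, 0.3]]), np.array([[0.1, 0.1], [0.0, 0.2]])]

# Impulse responses h(0..n) from (2.61).
h = [Bc[0]]
for k in range(1, n + 1):
    h.append(Bc[k] - sum(Ac[i - 1] @ h[k - i] for i in range(1, k + 1)))

# Theorem 2.4: observability canonical realization {A, B, C, D}.
A = np.zeros((n * p, n * p))
A[:-p, p:] = np.eye((n - 1) * p)            # identity blocks above the diagonal
for i in range(1, n + 1):                   # last block row: -A_n ... -A_1
    A[-p:, (n - i) * p:(n - i + 1) * p] = -Ac[i - 1]
B = np.vstack(h[1:])                        # (2.78): stacked h(1), ..., h(n)
C = np.hstack([np.eye(p)] + [np.zeros((p, p))] * (n - 1))
D = Bc[0]

# Markov parameters of the realization reproduce the impulse response.
assert np.allclose(D, h[0])
assert np.allclose(C @ B, h[1])
assert np.allclose(C @ A @ B, h[2])
```

The same check extended to k > n confirms that the companion structure of A propagates the recursion (2.62).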

but by using (2.63), it can be expressed in terms of the coefficient matrices of the ARMAV model as in (2.72). The response y(t) can be extracted by using (2.67) as

$$ y(t) = \begin{bmatrix} I & 0 & \cdots & 0 \end{bmatrix} x(t) + B_0\,e(t) = C\,x(t) + D\,e(t) \tag{2.79} $$

which defines the observation matrix C and the direct term matrix D. Equation (2.77) corresponds to the first equation in (2.70), and (2.79) to the second equation. □

In a similar fashion it is possible to convert the ARMAV model (2.52), which accounts for the presence of disturbance, to an observability canonical state space realization. This is shown in the following theorem, which is given without proof.

Theorem 2.5 - A State Space Realization of the ARMAV Model - With Noise Modelling

A state space realization of the p-variate ARMAV(n,n) model (2.52) is given by

$$ \begin{aligned} \hat{x}(t+1\,|\,t) &= A\,\hat{x}(t\,|\,t-1) + K\,e(t) \\ y(t) &= C\,\hat{x}(t\,|\,t-1) + e(t), \qquad e(t) \sim \mathrm{NID}(0,\Lambda) \end{aligned} \tag{2.80} $$

with {A, K, C} defined as

$$ A = \begin{bmatrix} 0 & I & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & I \\ -A_n & -A_{n-1} & \cdots & -A_1 \end{bmatrix} \tag{2.81} $$

$$ K = \begin{bmatrix} I & & & \\ A_1 & I & & \\ \vdots & & \ddots & \\ A_{n-1} & A_{n-2} & \cdots & I \end{bmatrix}^{-1} \begin{bmatrix} C_1 - A_1 \\ C_2 - A_2 \\ \vdots \\ C_n - A_n \end{bmatrix} \tag{2.82} $$

$$ C = \begin{bmatrix} I & 0 & \cdots & 0 \end{bmatrix} \tag{2.83} $$

and the state vector x̂(t|t−1) defined as

$$ \hat{x}(t\,|\,t-1) = \begin{bmatrix} \hat{y}(t\,|\,t-1) \\ \hat{y}(t+1\,|\,t-1) \\ \vdots \\ \hat{y}(t+n-2\,|\,t-1) \\ \hat{y}(t+n-1\,|\,t-1) \end{bmatrix} \tag{2.84} $$

For univariate ARMA models several canonical state space realizations exist, see e.g. Kailath [48]. However, for multivariate systems, the natural analogs of the univariate realizations do not necessarily yield minimal realizations. This may be a severe drawback, unless one is able to reduce the realization to a minimal one. However, since reliable and applicable model reduction techniques exist, see e.g. Hoen [38], there is no cause for alarm in using the observability canonical state space realization. This particular representation of ARMAV models has been applied by many authors, see e.g. Andersen et al. [6], Aoki [11], Hannan et al. [33], Hoen [38], Ljung [71], Pandit et al. [86] and Prevosto et al. [94].

2.6 Summary

In this chapter, two different representations of multivariate linear and time-invariant discrete-time systems, subjected to Gaussian white noise excitation, have been introduced. The two representations are the stochastic state space representation and the representation by stochastic ARMAV models. These representations provide equivalent descriptions of the discrete-time system. It might seem meaningless to spend time on both representations; however, both have advantages. The manipulation of a state space representation of a system is very simple, since it only involves manipulations of a few system matrices. On the other hand, the recursive structure of the ARMAV model might reduce the computational effort in some applications. This chapter has concerned noise-free systems as well as systems affected by noise. In both cases, a state space representation and an equivalent ARMAV model have been derived. In the context of system identification based on measured data, it is very important to account for the presence of noise, so in this case the innovation state space representation or the equivalent ARMAV model that accounts for the presence of noise should be used. On the other
hand, in case of analytical studies of linear and time-invariant systems, the noise-free representations would probably be preferred.
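As a closing check, the innovation-form realization of Theorem 2.5 can be verified numerically in the same way as Theorem 2.4. In the innovation form the direct term is the identity (B_0 = I), so the impulse responses follow from (2.61) with the moving average matrices C_i, and the gain K of (2.82) must equal the stacked impulse responses. A sketch with assumed example coefficients (illustrative only):

```python
import numpy as np

# Hypothetical innovation-form ARMAV(2,2): B_0 = I, MA matrices C_1, C_2.
p, n = 2, 2
Ac = [np.array([[0.5, 0.1], [0.0, 0.4]]), np.array([[0.2, 0.0], [0.1, 0.1]])]
Cc = [np.array([[0.4, 0.0], [0.1, 0.5]]), np.array([[0.1, 0.0], [0.0, 0.1]])]

# Impulse responses with h(0) = I, using (2.61) with B_0 = I, B_k = C_k.
h = [np.eye(p)]
for k in range(1, n + 1):
    h.append(Cc[k - 1] - sum(Ac[i - 1] @ h[k - i] for i in range(1, k + 1)))

# Gain matrix K from (2.82): block-triangular solve of [C_i - A_i].
L = np.block([[np.eye(p), np.zeros((p, p))], [Ac[0], np.eye(p)]])
K = np.linalg.solve(L, np.vstack([Cc[0] - Ac[0], Cc[1] - Ac[1]]))

# Companion matrix A (2.81) and observation matrix C (2.83).
A = np.zeros((n * p, n * p))
A[:-p, p:] = np.eye((n - 1) * p)
for i in range(1, n + 1):
    A[-p:, (n - i) * p:(n - i + 1) * p] = -Ac[i - 1]
C = np.hstack([np.eye(p), np.zeros((p, p))])

# K equals the stacked impulse responses, and C A^(k-1) K reproduces h(k).
assert np.allclose(K, np.vstack(h[1:]))
assert np.allclose(C @ K, h[1])
assert np.allclose(C @ A @ K, h[2])
```

The Markov parameters C A^(k-1) K of the innovation realization thus match the impulse response of the disturbance-accounting ARMAV model, mirroring the noise-free case.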

Identification of Civil Engineering Structures using Vector ARMA Models
Andersen, P.
Aalborg Universitet, 1997. Publisher's PDF, also known as Version of record.

More information

SYSTEMTEORI - KALMAN FILTER VS LQ CONTROL

SYSTEMTEORI - KALMAN FILTER VS LQ CONTROL SYSTEMTEORI - KALMAN FILTER VS LQ CONTROL 1. Optimal regulator with noisy measurement Consider the following system: ẋ = Ax + Bu + w, x(0) = x 0 where w(t) is white noise with Ew(t) = 0, and x 0 is a stochastic

More information

RECURSIVE ESTIMATION AND KALMAN FILTERING

RECURSIVE ESTIMATION AND KALMAN FILTERING Chapter 3 RECURSIVE ESTIMATION AND KALMAN FILTERING 3. The Discrete Time Kalman Filter Consider the following estimation problem. Given the stochastic system with x k+ = Ax k + Gw k (3.) y k = Cx k + Hv

More information

Cramér-Rao Bounds for Estimation of Linear System Noise Covariances

Cramér-Rao Bounds for Estimation of Linear System Noise Covariances Journal of Mechanical Engineering and Automation (): 6- DOI: 593/jjmea Cramér-Rao Bounds for Estimation of Linear System oise Covariances Peter Matiso * Vladimír Havlena Czech echnical University in Prague

More information

14 - Gaussian Stochastic Processes

14 - Gaussian Stochastic Processes 14-1 Gaussian Stochastic Processes S. Lall, Stanford 211.2.24.1 14 - Gaussian Stochastic Processes Linear systems driven by IID noise Evolution of mean and covariance Example: mass-spring system Steady-state

More information

3. ESTIMATION OF SIGNALS USING A LEAST SQUARES TECHNIQUE

3. ESTIMATION OF SIGNALS USING A LEAST SQUARES TECHNIQUE 3. ESTIMATION OF SIGNALS USING A LEAST SQUARES TECHNIQUE 3.0 INTRODUCTION The purpose of this chapter is to introduce estimators shortly. More elaborated courses on System Identification, which are given

More information

EECE Adaptive Control

EECE Adaptive Control EECE 574 - Adaptive Control Recursive Identification Algorithms Guy Dumont Department of Electrical and Computer Engineering University of British Columbia January 2012 Guy Dumont (UBC EECE) EECE 574 -

More information

State Observers and the Kalman filter

State Observers and the Kalman filter Modelling and Control of Dynamic Systems State Observers and the Kalman filter Prof. Oreste S. Bursi University of Trento Page 1 Feedback System State variable feedback system: Control feedback law:u =

More information

Linear Prediction Theory

Linear Prediction Theory Linear Prediction Theory Joseph A. O Sullivan ESE 524 Spring 29 March 3, 29 Overview The problem of estimating a value of a random process given other values of the random process is pervasive. Many problems

More information

Chap 4. State-Space Solutions and

Chap 4. State-Space Solutions and Chap 4. State-Space Solutions and Realizations Outlines 1. Introduction 2. Solution of LTI State Equation 3. Equivalent State Equations 4. Realizations 5. Solution of Linear Time-Varying (LTV) Equations

More information

6.435, System Identification

6.435, System Identification SET 6 System Identification 6.435 Parametrized model structures One-step predictor Identifiability Munther A. Dahleh 1 Models of LTI Systems A complete model u = input y = output e = noise (with PDF).

More information

RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK

RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK RECURSIVE SUBSPACE IDENTIFICATION IN THE LEAST SQUARES FRAMEWORK TRNKA PAVEL AND HAVLENA VLADIMÍR Dept of Control Engineering, Czech Technical University, Technická 2, 166 27 Praha, Czech Republic mail:

More information

12. Prediction Error Methods (PEM)

12. Prediction Error Methods (PEM) 12. Prediction Error Methods (PEM) EE531 (Semester II, 2010) description optimal prediction Kalman filter statistical results computational aspects 12-1 Description idea: determine the model parameter

More information

Observability and state estimation

Observability and state estimation EE263 Autumn 2015 S Boyd and S Lall Observability and state estimation state estimation discrete-time observability observability controllability duality observers for noiseless case continuous-time observability

More information

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities.

Here represents the impulse (or delta) function. is an diagonal matrix of intensities, and is an diagonal matrix of intensities. 19 KALMAN FILTER 19.1 Introduction In the previous section, we derived the linear quadratic regulator as an optimal solution for the fullstate feedback control problem. The inherent assumption was that

More information

Lecture 19 Observability and state estimation

Lecture 19 Observability and state estimation EE263 Autumn 2007-08 Stephen Boyd Lecture 19 Observability and state estimation state estimation discrete-time observability observability controllability duality observers for noiseless case continuous-time

More information

New Introduction to Multiple Time Series Analysis

New Introduction to Multiple Time Series Analysis Helmut Lütkepohl New Introduction to Multiple Time Series Analysis With 49 Figures and 36 Tables Springer Contents 1 Introduction 1 1.1 Objectives of Analyzing Multiple Time Series 1 1.2 Some Basics 2

More information

Nonlinear Identification of Backlash in Robot Transmissions

Nonlinear Identification of Backlash in Robot Transmissions Nonlinear Identification of Backlash in Robot Transmissions G. Hovland, S. Hanssen, S. Moberg, T. Brogårdh, S. Gunnarsson, M. Isaksson ABB Corporate Research, Control Systems Group, Switzerland ABB Automation

More information

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Prediction Error Methods - Torsten Söderström

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Prediction Error Methods - Torsten Söderström PREDICTIO ERROR METHODS Torsten Söderström Department of Systems and Control, Information Technology, Uppsala University, Uppsala, Sweden Keywords: prediction error method, optimal prediction, identifiability,

More information

Notes on Time Series Modeling

Notes on Time Series Modeling Notes on Time Series Modeling Garey Ramey University of California, San Diego January 17 1 Stationary processes De nition A stochastic process is any set of random variables y t indexed by t T : fy t g

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY & Contents PREFACE xiii 1 1.1. 1.2. Difference Equations First-Order Difference Equations 1 /?th-order Difference

More information

OPTIMAL CONTROL AND ESTIMATION

OPTIMAL CONTROL AND ESTIMATION OPTIMAL CONTROL AND ESTIMATION Robert F. Stengel Department of Mechanical and Aerospace Engineering Princeton University, Princeton, New Jersey DOVER PUBLICATIONS, INC. New York CONTENTS 1. INTRODUCTION

More information

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY

Time Series Analysis. James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY Time Series Analysis James D. Hamilton PRINCETON UNIVERSITY PRESS PRINCETON, NEW JERSEY PREFACE xiii 1 Difference Equations 1.1. First-Order Difference Equations 1 1.2. pth-order Difference Equations 7

More information

Predictive Control of Gyroscopic-Force Actuators for Mechanical Vibration Damping

Predictive Control of Gyroscopic-Force Actuators for Mechanical Vibration Damping ARC Centre of Excellence for Complex Dynamic Systems and Control, pp 1 15 Predictive Control of Gyroscopic-Force Actuators for Mechanical Vibration Damping Tristan Perez 1, 2 Joris B Termaat 3 1 School

More information

6.241 Dynamic Systems and Control

6.241 Dynamic Systems and Control 6.241 Dynamic Systems and Control Lecture 24: H2 Synthesis Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology May 4, 2011 E. Frazzoli (MIT) Lecture 24: H 2 Synthesis May

More information

Dynamic measurement: application of system identification in metrology

Dynamic measurement: application of system identification in metrology 1 / 25 Dynamic measurement: application of system identification in metrology Ivan Markovsky Dynamic measurement takes into account the dynamical properties of the sensor 2 / 25 model of sensor as dynamical

More information

Optimization-Based Control

Optimization-Based Control Optimization-Based Control Richard M. Murray Control and Dynamical Systems California Institute of Technology DRAFT v1.7a, 19 February 2008 c California Institute of Technology All rights reserved. This

More information

DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof

DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE. Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof DESIGNING A KALMAN FILTER WHEN NO NOISE COVARIANCE INFORMATION IS AVAILABLE Robert Bos,1 Xavier Bombois Paul M. J. Van den Hof Delft Center for Systems and Control, Delft University of Technology, Mekelweg

More information

Statistics 910, #15 1. Kalman Filter

Statistics 910, #15 1. Kalman Filter Statistics 910, #15 1 Overview 1. Summary of Kalman filter 2. Derivations 3. ARMA likelihoods 4. Recursions for the variance Kalman Filter Summary of Kalman filter Simplifications To make the derivations

More information

Lessons in Estimation Theory for Signal Processing, Communications, and Control

Lessons in Estimation Theory for Signal Processing, Communications, and Control Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL

More information

Lecture Note #7 (Chap.11)

Lecture Note #7 (Chap.11) System Modeling and Identification Lecture Note #7 (Chap.) CBE 702 Korea University Prof. Dae Ryoo Yang Chap. Real-time Identification Real-time identification Supervision and tracing of time varying parameters

More information

Time Series: Theory and Methods

Time Series: Theory and Methods Peter J. Brockwell Richard A. Davis Time Series: Theory and Methods Second Edition With 124 Illustrations Springer Contents Preface to the Second Edition Preface to the First Edition vn ix CHAPTER 1 Stationary

More information

1 Kalman Filter Introduction

1 Kalman Filter Introduction 1 Kalman Filter Introduction You should first read Chapter 1 of Stochastic models, estimation, and control: Volume 1 by Peter S. Maybec (available here). 1.1 Explanation of Equations (1-3) and (1-4) Equation

More information

Stochastic Processes

Stochastic Processes Stochastic Processes Stochastic Process Non Formal Definition: Non formal: A stochastic process (random process) is the opposite of a deterministic process such as one defined by a differential equation.

More information

Design of FIR Smoother Using Covariance Information for Estimating Signal at Start Time in Linear Continuous Systems

Design of FIR Smoother Using Covariance Information for Estimating Signal at Start Time in Linear Continuous Systems Systems Science and Applied Mathematics Vol. 1 No. 3 2016 pp. 29-37 http://www.aiscience.org/journal/ssam Design of FIR Smoother Using Covariance Information for Estimating Signal at Start Time in Linear

More information

Lecture 9. Introduction to Kalman Filtering. Linear Quadratic Gaussian Control (LQG) G. Hovland 2004

Lecture 9. Introduction to Kalman Filtering. Linear Quadratic Gaussian Control (LQG) G. Hovland 2004 MER42 Advanced Control Lecture 9 Introduction to Kalman Filtering Linear Quadratic Gaussian Control (LQG) G. Hovland 24 Announcement No tutorials on hursday mornings 8-9am I will be present in all practical

More information

Subspace Identification Methods

Subspace Identification Methods Czech Technical University in Prague Faculty of Electrical Engineering Department of Control Engineering Subspace Identification Methods Technical Report Ing. Pavel Trnka Supervisor: Prof. Ing. Vladimír

More information

EECE Adaptive Control

EECE Adaptive Control EECE 574 - Adaptive Control Recursive Identification in Closed-Loop and Adaptive Control Guy Dumont Department of Electrical and Computer Engineering University of British Columbia January 2010 Guy Dumont

More information

Time-Varying Systems and Computations Lecture 3

Time-Varying Systems and Computations Lecture 3 Time-Varying Systems and Computations Lecture 3 Klaus Diepold November 202 Linear Time-Varying Systems State-Space System Model We aim to derive the matrix containing the time-varying impulse responses

More information

CALIFORNIA INSTITUTE OF TECHNOLOGY Control and Dynamical Systems. CDS 110b

CALIFORNIA INSTITUTE OF TECHNOLOGY Control and Dynamical Systems. CDS 110b CALIFORNIA INSTITUTE OF TECHNOLOGY Control and Dynamical Systems CDS 110b R. M. Murray Kalman Filters 25 January 2006 Reading: This set of lectures provides a brief introduction to Kalman filtering, following

More information

CALIFORNIA INSTITUTE OF TECHNOLOGY Control and Dynamical Systems. CDS 110b

CALIFORNIA INSTITUTE OF TECHNOLOGY Control and Dynamical Systems. CDS 110b CALIFORNIA INSTITUTE OF TECHNOLOGY Control and Dynamical Systems CDS 110b R. M. Murray Kalman Filters 14 January 2007 Reading: This set of lectures provides a brief introduction to Kalman filtering, following

More information

EL2520 Control Theory and Practice

EL2520 Control Theory and Practice EL2520 Control Theory and Practice Lecture 8: Linear quadratic control Mikael Johansson School of Electrical Engineering KTH, Stockholm, Sweden Linear quadratic control Allows to compute the controller

More information

Kalman Filter and Parameter Identification. Florian Herzog

Kalman Filter and Parameter Identification. Florian Herzog Kalman Filter and Parameter Identification Florian Herzog 2013 Continuous-time Kalman Filter In this chapter, we shall use stochastic processes with independent increments w 1 (.) and w 2 (.) at the input

More information

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ

ME 234, Lyapunov and Riccati Problems. 1. This problem is to recall some facts and formulae you already know. e Aτ BB e A τ dτ ME 234, Lyapunov and Riccati Problems. This problem is to recall some facts and formulae you already know. (a) Let A and B be matrices of appropriate dimension. Show that (A, B) is controllable if and

More information

hapter 8 Simulation/Realization 8 Introduction Given an nth-order state-space description of the form x_ (t) = f (x(t) u(t) t) (state evolution equati

hapter 8 Simulation/Realization 8 Introduction Given an nth-order state-space description of the form x_ (t) = f (x(t) u(t) t) (state evolution equati Lectures on Dynamic Systems and ontrol Mohammed Dahleh Munther Dahleh George Verghese Department of Electrical Engineering and omputer Science Massachuasetts Institute of Technology c hapter 8 Simulation/Realization

More information

Exam in Automatic Control II Reglerteknik II 5hp (1RT495)

Exam in Automatic Control II Reglerteknik II 5hp (1RT495) Exam in Automatic Control II Reglerteknik II 5hp (1RT495) Date: August 4, 018 Venue: Bergsbrunnagatan 15 sal Responsible teacher: Hans Rosth. Aiding material: Calculator, mathematical handbooks, textbooks

More information

Time Series 2. Robert Almgren. Sept. 21, 2009

Time Series 2. Robert Almgren. Sept. 21, 2009 Time Series 2 Robert Almgren Sept. 21, 2009 This week we will talk about linear time series models: AR, MA, ARMA, ARIMA, etc. First we will talk about theory and after we will talk about fitting the models

More information

Adaptive Channel Modeling for MIMO Wireless Communications

Adaptive Channel Modeling for MIMO Wireless Communications Adaptive Channel Modeling for MIMO Wireless Communications Chengjin Zhang Department of Electrical and Computer Engineering University of California, San Diego San Diego, CA 99- Email: zhangc@ucsdedu Robert

More information

Observability. It was the property in Lyapunov stability which allowed us to resolve that

Observability. It was the property in Lyapunov stability which allowed us to resolve that Observability We have seen observability twice already It was the property which permitted us to retrieve the initial state from the initial data {u(0),y(0),u(1),y(1),...,u(n 1),y(n 1)} It was the property

More information

Model Reduction for Unstable Systems

Model Reduction for Unstable Systems Model Reduction for Unstable Systems Klajdi Sinani Virginia Tech klajdi@vt.edu Advisor: Serkan Gugercin October 22, 2015 (VT) SIAM October 22, 2015 1 / 26 Overview 1 Introduction 2 Interpolatory Model

More information

Econometría 2: Análisis de series de Tiempo

Econometría 2: Análisis de series de Tiempo Econometría 2: Análisis de series de Tiempo Karoll GOMEZ kgomezp@unal.edu.co http://karollgomez.wordpress.com Segundo semestre 2016 IX. Vector Time Series Models VARMA Models A. 1. Motivation: The vector

More information

Abstract. 1 Introduction

Abstract. 1 Introduction Relationship between state-spaceand input-output models via observer Markov parameters M.Q. Phan,' R. W. Longman* ^Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ

More information

Unbiased minimum variance estimation for systems with unknown exogenous inputs

Unbiased minimum variance estimation for systems with unknown exogenous inputs Unbiased minimum variance estimation for systems with unknown exogenous inputs Mohamed Darouach, Michel Zasadzinski To cite this version: Mohamed Darouach, Michel Zasadzinski. Unbiased minimum variance

More information

Cointegrated VAR s. Eduardo Rossi University of Pavia. November Rossi Cointegrated VAR s Financial Econometrics / 56

Cointegrated VAR s. Eduardo Rossi University of Pavia. November Rossi Cointegrated VAR s Financial Econometrics / 56 Cointegrated VAR s Eduardo Rossi University of Pavia November 2013 Rossi Cointegrated VAR s Financial Econometrics - 2013 1 / 56 VAR y t = (y 1t,..., y nt ) is (n 1) vector. y t VAR(p): Φ(L)y t = ɛ t The

More information

Structural System Identification (KAIST, Summer 2017) Lecture Coverage:

Structural System Identification (KAIST, Summer 2017) Lecture Coverage: Structural System Identification (KAIST, Summer 2017) Lecture Coverage: Lecture 1: System Theory-Based Structural Identification Lecture 2: System Elements and Identification Process Lecture 3: Time and

More information

Sequential State Estimation (Crassidas and Junkins, Chapter 5)

Sequential State Estimation (Crassidas and Junkins, Chapter 5) Sequential State Estimation (Crassidas and Junkins, Chapter 5) Please read: 5.1, 5.3-5.6 5.3 The Discrete-Time Kalman Filter The discrete-time Kalman filter is used when the dynamics and measurements are

More information

Correspondence. Sampled-Data Filtering with Error Covariance Assignment

Correspondence. Sampled-Data Filtering with Error Covariance Assignment 666 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 49, NO. 3, MARCH 21 Correspondence Sampled-Data Filtering with Error Covariance Assignment Zidong Wang, Biao Huang, and Peijun Huo Abstract In this correspondence,

More information

Statistical and Adaptive Signal Processing

Statistical and Adaptive Signal Processing r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fifth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada International Edition contributions by Telagarapu Prabhakar Department

More information

ECEN 605 LINEAR SYSTEMS. Lecture 8 Invariant Subspaces 1/26

ECEN 605 LINEAR SYSTEMS. Lecture 8 Invariant Subspaces 1/26 1/26 ECEN 605 LINEAR SYSTEMS Lecture 8 Invariant Subspaces Subspaces Let ẋ(t) = A x(t) + B u(t) y(t) = C x(t) (1a) (1b) denote a dynamic system where X, U and Y denote n, r and m dimensional vector spaces,

More information

The norms can also be characterized in terms of Riccati inequalities.

The norms can also be characterized in terms of Riccati inequalities. 9 Analysis of stability and H norms Consider the causal, linear, time-invariant system ẋ(t = Ax(t + Bu(t y(t = Cx(t Denote the transfer function G(s := C (si A 1 B. Theorem 85 The following statements

More information

Reliability and Risk Analysis. Time Series, Types of Trend Functions and Estimates of Trends

Reliability and Risk Analysis. Time Series, Types of Trend Functions and Estimates of Trends Reliability and Risk Analysis Stochastic process The sequence of random variables {Y t, t = 0, ±1, ±2 } is called the stochastic process The mean function of a stochastic process {Y t} is the function

More information

Nonlinear State Estimation Methods Overview and Application to PET Polymerization

Nonlinear State Estimation Methods Overview and Application to PET Polymerization epartment of Biochemical and Chemical Engineering Process ynamics Group () onlinear State Estimation Methods Overview and Polymerization Paul Appelhaus Ralf Gesthuisen Stefan Krämer Sebastian Engell The

More information

Linear Approximations of Nonlinear FIR Systems for Separable Input Processes

Linear Approximations of Nonlinear FIR Systems for Separable Input Processes Linear Approximations of Nonlinear FIR Systems for Separable Input Processes Martin Enqvist, Lennart Ljung Division of Automatic Control Department of Electrical Engineering Linköpings universitet, SE-581

More information

ECE 636: Systems identification

ECE 636: Systems identification ECE 636: Systems identification Lectures 3 4 Random variables/signals (continued) Random/stochastic vectors Random signals and linear systems Random signals in the frequency domain υ ε x S z + y Experimental

More information

Lecture 7: Discrete-time Models. Modeling of Physical Systems. Preprocessing Experimental Data.

Lecture 7: Discrete-time Models. Modeling of Physical Systems. Preprocessing Experimental Data. ISS0031 Modeling and Identification Lecture 7: Discrete-time Models. Modeling of Physical Systems. Preprocessing Experimental Data. Aleksei Tepljakov, Ph.D. October 21, 2015 Discrete-time Transfer Functions

More information

CBE495 LECTURE IV MODEL PREDICTIVE CONTROL

CBE495 LECTURE IV MODEL PREDICTIVE CONTROL What is Model Predictive Control (MPC)? CBE495 LECTURE IV MODEL PREDICTIVE CONTROL Professor Dae Ryook Yang Fall 2013 Dept. of Chemical and Biological Engineering Korea University * Some parts are from

More information

FIR Filters for Stationary State Space Signal Models

FIR Filters for Stationary State Space Signal Models Proceedings of the 17th World Congress The International Federation of Automatic Control FIR Filters for Stationary State Space Signal Models Jung Hun Park Wook Hyun Kwon School of Electrical Engineering

More information

A New Subspace Identification Method for Open and Closed Loop Data

A New Subspace Identification Method for Open and Closed Loop Data A New Subspace Identification Method for Open and Closed Loop Data Magnus Jansson July 2005 IR S3 SB 0524 IFAC World Congress 2005 ROYAL INSTITUTE OF TECHNOLOGY Department of Signals, Sensors & Systems

More information

Least Squares. Ken Kreutz-Delgado (Nuno Vasconcelos) ECE 175A Winter UCSD

Least Squares. Ken Kreutz-Delgado (Nuno Vasconcelos) ECE 175A Winter UCSD Least Squares Ken Kreutz-Delgado (Nuno Vasconcelos) ECE 75A Winter 0 - UCSD (Unweighted) Least Squares Assume linearity in the unnown, deterministic model parameters Scalar, additive noise model: y f (

More information

TIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M.

TIME SERIES ANALYSIS. Forecasting and Control. Wiley. Fifth Edition GWILYM M. JENKINS GEORGE E. P. BOX GREGORY C. REINSEL GRETA M. TIME SERIES ANALYSIS Forecasting and Control Fifth Edition GEORGE E. P. BOX GWILYM M. JENKINS GREGORY C. REINSEL GRETA M. LJUNG Wiley CONTENTS PREFACE TO THE FIFTH EDITION PREFACE TO THE FOURTH EDITION

More information

Errors-in-variables identification through covariance matching: Analysis of a colored measurement noise case

Errors-in-variables identification through covariance matching: Analysis of a colored measurement noise case 008 American Control Conference Westin Seattle Hotel Seattle Washington USA June -3 008 WeB8.4 Errors-in-variables identification through covariance matching: Analysis of a colored measurement noise case

More information

Further Results on Model Structure Validation for Closed Loop System Identification

Further Results on Model Structure Validation for Closed Loop System Identification Advances in Wireless Communications and etworks 7; 3(5: 57-66 http://www.sciencepublishinggroup.com/j/awcn doi:.648/j.awcn.735. Further esults on Model Structure Validation for Closed Loop System Identification

More information

KALMAN-TYPE RECURSIONS FOR TIME-VARYING ARMA MODELS AND THEIR IMPLICATION FOR LEAST SQUARES PROCEDURE ANTONY G AU T I E R (LILLE)

KALMAN-TYPE RECURSIONS FOR TIME-VARYING ARMA MODELS AND THEIR IMPLICATION FOR LEAST SQUARES PROCEDURE ANTONY G AU T I E R (LILLE) PROBABILITY AND MATHEMATICAL STATISTICS Vol 29, Fasc 1 (29), pp 169 18 KALMAN-TYPE RECURSIONS FOR TIME-VARYING ARMA MODELS AND THEIR IMPLICATION FOR LEAST SQUARES PROCEDURE BY ANTONY G AU T I E R (LILLE)

More information

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω

TAKEHOME FINAL EXAM e iω e 2iω e iω e 2iω ECO 513 Spring 2015 TAKEHOME FINAL EXAM (1) Suppose the univariate stochastic process y is ARMA(2,2) of the following form: y t = 1.6974y t 1.9604y t 2 + ε t 1.6628ε t 1 +.9216ε t 2, (1) where ε is i.i.d.

More information

Linear Discrete-time State Space Realization of a Modified Quadruple Tank System with State Estimation using Kalman Filter
Journal of Physics: Conference Series, open-access paper.
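The state estimation referred to here is the standard discrete-time Kalman filter; a minimal sketch for a generic linear system x_{k+1} = A x_k + w_k, y_k = C x_k + v_k (the matrices and noise levels below are illustrative assumptions, not the quadruple-tank model):

```python
import numpy as np

def kalman_step(A, C, Q, R, x, P, y):
    """One predict/update cycle of the discrete-time Kalman filter."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with measurement y
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Toy 2-state, 1-output system (illustrative values)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(A, C, Q, R, x, P, y=np.array([1.0]))
print(x)
```

Iterating `kalman_step` over a measurement sequence gives the filtered state trajectory.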

Prof. Dr.-Ing. Armin Dekorsy, Department of Communications Engineering: Stochastic Processes and Linear Algebra Recap Slides

Multivariate ARMA Processes
LECTURE 8. A vector y(t) of n elements is said to follow an n-variate ARMA process of orders p and q if it satisfies the equation A_0 y(t) + A_1 y(t-1) + ... + A_p y(t-p) = M
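The n-variate ARMA recursion quoted above, with the truncated right-hand side completed (by assumption) as M_0 ε(t) + ... + M_q ε(t-q), can be simulated by solving for y(t) at each step; a hedged sketch with made-up coefficient matrices:

```python
import numpy as np

def simulate_varma(A, M, n_steps, rng=None):
    """Simulate A[0] y(t) + A[1] y(t-1) + ... + A[p] y(t-p)
             = M[0] e(t) + ... + M[q] e(t-q)
    by solving for y(t) at each step. A, M are lists of (n, n) arrays."""
    rng = rng or np.random.default_rng(0)
    n = A[0].shape[0]
    p, q = len(A) - 1, len(M) - 1
    A0_inv = np.linalg.inv(A[0])
    y = np.zeros((n_steps, n))
    e = rng.standard_normal((n_steps, n))
    for t in range(max(p, q), n_steps):
        rhs = sum(M[j] @ e[t - j] for j in range(q + 1))
        rhs -= sum(A[i] @ y[t - i] for i in range(1, p + 1))
        y[t] = A0_inv @ rhs
    return y

# 2-variate ARMA(1,1) with illustrative, stable coefficient matrices
A = [np.eye(2), np.array([[-0.5, 0.1], [0.0, -0.4]])]
M = [np.eye(2), 0.3 * np.eye(2)]
y = simulate_varma(A, M, 200)
print(y.shape)  # (200, 2)
```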

Lecture 11, Andrei Antonenko, February 26, 2003
1 Examples of bases. Last time we studied bases of vector spaces. Today we're going to give some examples of bases. Example 1.1. Consider the vector space P

Parameter Estimation in a Moving Horizon Perspective
State and Parameter Estimation in Dynamical Systems, Reglerteknik, ISY, Linköpings Universitet.

Industrial Model Predictive Control
Emil Schultz Christensen, Kongens Lyngby 2013, DTU Compute-M.Sc.-2013-49. Technical University of Denmark, DTU Compute, Matematiktorvet, Building 303B, DK-2800 Kongens Lyngby

ADAPTIVE FILTER THEORY
Fourth Edition. Simon Haykin, Communications Research Laboratory, McMaster University, Hamilton, Ontario, Canada. Prentice Hall, Upper Saddle River, New Jersey 07458

ARIMA Modelling and Forecasting
Economic time series often appear nonstationary, because of trends, seasonal patterns, cycles, etc. However, the differences may appear stationary. Δx_t = x_t - x_{t-1} (first difference)
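The first difference Δx_t = x_t - x_{t-1} described in this snippet is one line in code; a small sketch (with a made-up trending series) showing how a linear trend disappears after differencing:

```python
import numpy as np

# A trending "nonstationary" series: linear trend plus noise
rng = np.random.default_rng(0)
t = np.arange(200)
x = 0.5 * t + rng.standard_normal(200)

# First difference: dx[t] = x[t] - x[t-1]
dx = np.diff(x)

# The trend is removed: the differenced series fluctuates around
# a constant level close to the trend slope (0.5) instead of growing.
print(round(dx.mean(), 2))
```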

var intro: Introduction to vector autoregressive models
Stata has a suite of commands for fitting, forecasting, interpreting, and performing inference on vector autoregressive (VAR) models

Linear System Theory
Wonhee Kim. Chapter 6: Controllability & Observability; Chapter 7: Minimal Realizations. May 2, 2017. Recap: state space equation, linear algebra, solutions of LTI and LTV systems, stability

State estimation and the Kalman filter
PhD David Di Ruscio, Telemark University College, Department of Technology, Systems and Control Engineering, N-3914 Porsgrunn, Norway

PERFORMANCE ANALYSIS OF CLOSED LOOP SYSTEM WITH A TAILOR MADE PARAMETERIZATION
Jianhong Wang, Hong Jiang and Yonghong Zhu. International Journal of Innovative Computing, Information and Control, ICIC International, 2018. ISSN 1349-4198, Volume 14, Number 1, February 2018, pp. 181-196

Identification of MIMO linear models: introduction to subspace methods
Marco Lovera, Dipartimento di Scienze e Tecnologie Aerospaziali, Politecnico di Milano, marco.lovera@polimi.it

NONLINEAR INTEGRAL MINIMUM VARIANCE-LIKE CONTROL WITH APPLICATION TO AN AIRCRAFT SYSTEM
D.G. Dimogianopoulos, J.D. Hios and S.D. Fassois, Department of Mechanical & Aeronautical Engineering, GR-26500 Patras

Econ 4120 Applied Forecasting Methods, L10: Forecasting with Regression Models
Sung Y. Park, CUHK. Conditional forecasting model: forecast a variable conditional on assumptions about other variables.

ME 132, Fall 2015, Quiz # 2
Points per problem: #1: 14, #2: 10, #3: 8, #4: 6, #5: 14, #6: 8; total 60. Rules: 1. Two sheets of notes allowed, 8.5 x 11 inches; both sides can be used. 2. Calculator is allowed; keep it in plain view

EL 625 Lecture 10: Pole Placement and Observer Design
Pole placement: consider the system ẋ = Ax (1). The solution to this system is x(t) = e^{At} x(0) (2). If the eigenvalues of A all lie in the
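The quoted solution x(t) = e^{At} x(0) can be evaluated numerically via the matrix exponential; a small sketch (the matrix A below is an illustrative stable example, computed through an eigendecomposition, which assumes A is diagonalizable):

```python
import numpy as np

def state_transition(A, t):
    """Compute e^{At} for a diagonalizable A via its eigendecomposition."""
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real

# Illustrative stable system: eigenvalues -1 and -2 (negative real parts)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])

x_t = state_transition(A, 5.0) @ x0
print(np.linalg.norm(x_t))  # decays toward 0 since A is Hurwitz
```

At t = 0 the function returns the identity, matching e^{A·0} = I, and for any Hurwitz A the state norm shrinks as t grows.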

Properties of Zero-Free Spectral Matrices
Brian D. O. Anderson, Life Fellow, IEEE, and Manfred Deistler, Fellow, IEEE. IEEE Transactions on Automatic Control, Vol. 54, No. 10, October 2009, p. 2365

Empirical Gramians and Balanced Truncation for Model Reduction of Nonlinear Systems
Antoni Ras, Departament de Matemàtica Aplicada 4, Universitat Politècnica de Catalunya. Lecture goals: to review the basic

State Estimation of Linear and Nonlinear Dynamic Systems, Part II: Observability and Stability
James B. Rawlings and Fernando V. Lima, Department of Chemical and Biological Engineering, University of Wisconsin

ARMA Models in Modal Space, Rune Brincker, Aalborg Universitet
Published in: Proceedings of the 17th International Modal Analysis Conference (IMAC), Kissimmee, Florida, USA, February 8-11, 1999

2.5 Forecasting and Impulse Response Functions
Principles of forecasting: forecast based on conditional expectations. Suppose we are interested in forecasting the value of y_{t+1} based on a set of variables
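For a linear model, the conditional-expectation forecast mentioned in this snippet has a closed form; e.g. for an AR(1) process y_t = φ y_{t-1} + ε_t with zero-mean shocks, the h-step forecast is φ^h y_t. A generic illustration (not from the quoted lecture):

```python
import numpy as np

def ar1_forecast(phi, y_t, horizon):
    """h-step conditional-expectation forecasts E[y_{t+h} | y_t] = phi**h * y_t,
    h = 1..horizon, for the AR(1) model y_t = phi*y_{t-1} + eps_t."""
    return phi ** np.arange(1, horizon + 1) * y_t

print(ar1_forecast(0.5, 8.0, 3))  # [4. 2. 1.]
```

Each extra step multiplies the forecast by φ, so for |φ| < 1 the forecast decays geometrically toward the unconditional mean of zero.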

16.31 Fall 2005 Lecture Presentation, Mon 31-Oct-05, ver 1.1
Charles P. Coleman, October 31, 2005. Learning outcomes: perform controllability tests; perform observability tests
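The controllability test this lecture refers to is typically the rank test on the controllability matrix [B, AB, ..., A^{n-1}B]; a minimal sketch on an assumed example system:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ...] up to A^{n-1} B for an n-state system."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Illustrative 2-state, single-input system
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

C_mat = controllability_matrix(A, B)
controllable = np.linalg.matrix_rank(C_mat) == A.shape[0]
print(controllable)  # True: rank 2 equals the state dimension
```

The dual observability test replaces (A, B) with (A.T, C.T) and checks the rank of the resulting matrix.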

12.4 Known Channel (Water-Filling Solution)
ECEn 665: Antennas and Propagation for Wireless Communications. The channel scenarios we have looked at above represent special cases for which the capacity
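The water-filling power allocation named in this section title can be computed with a simple bisection on the water level; a generic sketch (the channel gains and power budget are made-up values, not from the quoted text):

```python
import numpy as np

def water_filling(gains, total_power, iters=100):
    """Allocate total_power across channels with gains g_i so that
    p_i = max(0, mu - 1/g_i), finding the water level mu by bisection."""
    inv_g = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + inv_g.max()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - inv_g)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv_g)

# Three channels; the weakest (gain 0.25) gets no power at this budget
p = water_filling([2.0, 1.0, 0.25], total_power=3.0)
print(p.round(3), p.sum().round(3))
```

Stronger channels (smaller 1/g_i) sit deeper below the water level and therefore receive more power; channels whose inverse gain exceeds the level get none.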

Mini-Course 07: Kalman Particle Filters
Henrique Massard da Fonseca, Cesar Cunha Pacheco, Wellington Bettencurte, Julio Dutra. Agenda: state estimation problems & the Kalman filter