L 2 Model Reduction and Variance Reduction


Technical report from Automatic Control at Linköpings universitet

L2 Model Reduction and Variance Reduction

Fredrik Tjärnström, Lennart Ljung
Division of Automatic Control
fredrikt@isy.liu.se, ljung@isy.liu.se

25th June 2007

Report no.: LiTH-ISY-R-280

Accepted for publication in Automatica, 2002

Address: Department of Electrical Engineering, Linköpings universitet, SE Linköping, Sweden

AUTOMATIC CONTROL, REGLERTEKNIK, LINKÖPINGS UNIVERSITET

Technical reports from the Automatic Control group in Linköping are available from

Abstract: In this contribution we examine certain variance properties of model reduction. The focus is on L2 model reduction, but some general results are also presented. These general results can be used to analyze various other model reduction schemes. The models we study are finite impulse response (FIR) and output error (OE) models. We compare the variance of two estimated models. The first one is estimated directly from data and the other is computed by reducing a high order model by L2 model reduction. In the FIR case, we show that it is never better to estimate the model directly from data, compared to estimating it via L2 model reduction of a high order FIR model. For OE models we show that the reduced order model has the same variance as the directly estimated one if the reduced model class used contains the true system.

Keywords: identification, model reduction, variance reduction

Automatica 38 (2002)

L2 Model reduction and variance reduction

F. Tjärnström, L. Ljung
Department of Electrical Engineering, Linköpings Universitet, SE Linköping, Sweden

Received June 2001; received in revised form 4 January 2002; accepted 6 April 2002

© 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Model reduction; Identification; Variance reduction

1. Introduction

There are many methods available for model reduction, e.g., balanced reduction (Moore, 1981), Hankel-norm model reduction (Glover, 1984), and L2 model reduction (Spanos, Milman, & Mingori, 1992). The main objective, using any of these methods, is to compress a given representation of a system into a less complex one, without losing much information. One of the most extreme examples of this is the actual identification phase, where the model consisting of input-output data, Z^N, is mapped into an nth (N ≫ n) order parameterized one. In the standard setting (see Section 2) this corresponds to finding the best L2 approximation of data (given a model class).
Irrespective of how the reduction phase is performed (Moore, 1981; Glover, 1984; Spanos et al., 1992), it is possible to keep track of the bias errors that the reduction step gives rise to. There has, however, been little discussion of how the variance of the high order estimated model maps over to the low order one. Since the variance error strongly affects the use and interpretation of the reduced model, it is in many cases at least as important as the bias error. In this paper, we discuss this topic, or more precisely, the problem of computing the variance of the reduced model.

This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Brett Ninness under the direction of Editor Torsten Söderström. Corresponding author e-mail address: fredrikt@isy.liu.se (F. Tjärnström).

We start by introducing notation and discussing some facts about system identification in Section 2. Some inspiration about the L2 model reduction problem is given in Section 3. In Section 4 related approaches to estimating the variance of the reduced model are discussed. General formulas describing the covariance of the low order model are presented in Section 5. In Section 6 we explicitly compute the covariance matrix when the reduced models are of finite impulse response (FIR) type. Section 7 states the main result, i.e., that the variance of the reduced model is the same as the variance of the directly estimated model. This is proved in Section 8. A simulation example is presented in Section 9, and some conclusions are given in Section 10.

2. Prediction error methods

Throughout the paper, we denote the input signal by u(t), the output signal by y(t), and N is the total number of measured data. We assume that y(t) is generated according to

y(t) = G_0(q)u(t) + v(t),  v(t) = H_0(q)e(t),  (1)

where G_0(q) is a linear time-invariant system, usually referred to as the true system, and q is the discrete-time shift operator, i.e., qu(t) = u(t+1).
Furthermore, we assume that

the additive noise, v(t), is independent of the input, u(t), and that it is a filtered version of an independent and identically distributed noise sequence e(t) with variance λ. The noise filter

H_0(q) = Σ_{i=0}^∞ h_i q^{−i},  h_0 = 1,  (2)

is assumed to be monic and inversely stable. The models we fit to data are parameterized by a d-dimensional real-valued parameter vector θ, i.e.,

y(t) = G(q, θ)u(t) + v(t).  (3)

More specifically we study FIR and output error (OE) models. These are parameterized by

G(q, θ) = B(q, θ)/F(q, θ),
B(q, θ) = b_1 q^{−n_k} + ... + b_{n_b} q^{−n_k−n_b+1},
F(q, θ) = 1 + f_1 q^{−1} + ... + f_{n_f} q^{−n_f},
θ = (b_1 ... b_{n_b} f_1 ... f_{n_f})^T,  (4)

where F(q, θ) ≡ 1 in the FIR case. We define a loss function as the mean of the squared sum of the prediction errors (in this case the output errors)

V_N(θ) = (1/N) Σ_{t=1}^N ε²(t, θ),  (5)
ε(t, θ) = y(t) − ŷ(t|θ) = y(t) − G(q, θ)u(t).  (6)

The estimate of θ is taken as the minimizer of (5),

θ̂_N = arg min_θ V_N(θ) = arg min_θ (1/N) Σ_{t=1}^N ε²(t, θ),  (7)

i.e., we use prediction error methods (PEM). The basic result is then (Ljung, 1999, Chapter 8) that under weak conditions

θ̂_N → θ* = arg min_θ E ε²(t, θ)  as N → ∞.  (8)

That is, θ̂_N converges to the best model provided by the model class. If the true system belongs to the model class, θ̂_N converges to the true parameter vector, θ_0, that satisfies G(e^{iω}, θ_0) = G_0(e^{iω}) for almost all ω. If the minimizer is not unique, θ̂_N converges to some value in the set of minimizers. To avoid lack of uniqueness one can regularize the loss function. This means that (5) is replaced by

W_N(θ) = V_N(θ) + δ|θ − θ^#|²  (9)

for a δ > 0 and some θ^# minimizing V_N(θ). See also Ljung (1999).

The expression for the distribution of the estimate is based on the central limit theorem, assuming global identifiability and some other weak conditions (see Ljung, 1999, Chapter 9). We present it together with a general expression for the covariance of the parameter estimates assuming that the output error model (3) is used; see Kabaila (1983) and Ljung (1999, Chapter 9).
√N(θ̂_N − θ_0) ∈ AsN(0, P_θ),  (10)

P_θ = λ [E ψ(t, θ_0)ψ^T(t, θ_0)]^{−1} [E ψ̃(t, θ_0)ψ̃^T(t, θ_0)] [E ψ(t, θ_0)ψ^T(t, θ_0)]^{−1},  (11)

ψ(t, θ) = −(d/dθ) ε(t, θ) = (d/dθ) ŷ(t|θ),  (12)

ψ̃(t, θ_0) = Σ_{i=0}^∞ h_i ψ(t + i, θ_0).  (13)

When the noise, v(t), actually is white, the covariance expression simplifies to

P_θ = λ [E ψ(t, θ_0)ψ^T(t, θ_0)]^{−1}.  (14)

The regularized versions of (11) and (14) are

P_θ = λ [E ψ(t, θ_0)ψ^T(t, θ_0) + δI]^{−1} [E ψ̃(t, θ_0)ψ̃^T(t, θ_0)] [E ψ(t, θ_0)ψ^T(t, θ_0) + δI]^{−1},  (15)

P_θ = λ [E ψ(t, θ_0)ψ^T(t, θ_0) + δI]^{−1} [E ψ(t, θ_0)ψ^T(t, θ_0)] [E ψ(t, θ_0)ψ^T(t, θ_0) + δI]^{−1},  (16)

respectively (in somewhat shorthand notation).

The calculation of the distributions for other statistics is based on a linear approximation of the mapping from the parameter distribution given by (10) to the statistic of interest. This mapping is usually referred to as Gauss' approximation formula. It states that if θ̂_N is sufficiently close to E θ̂_N, we can make the approximation

Cov f(θ̂_N) ≈ [f'(θ*)] P_θ [f'(θ*)]^T ≈ [f'(θ̂_N)] P_θ [f'(θ̂_N)]^T.  (17)

The quality of this approximation increases as the variance of θ̂_N decreases. Furthermore, if θ̂_N is asymptotically Gaussian distributed, so is f(θ̂_N).

3. Model reduction

To estimate a low order model, G(e^{iω}, θ), of a system, several possibilities exist. The most obvious one is to directly estimate a lower order model from data via (7). As is known from, e.g., Ljung (1999), the prediction error (output error) estimate automatically gives models that are L2 approximations of the true system in a frequency-weighted norm, determined

by the input spectrum and noise model:

θ̂_N^d → arg min_θ ∫_{−π}^{π} |G_0(e^{iω}) − G(e^{iω}, θ)|² Φ_u(ω) dω  as N → ∞,  (18)

where Φ_u(ω) is the input spectrum. This is just a restatement of (8). A second possibility is to estimate a high order model, which is then subjected to model reduction to the desired order. See, e.g., Wahlberg (1989). For the model reduction step, a variety of methods could be applied, like truncating balanced state space realizations, or applying L2-norm reduction. The latter method means that the low order model, parameterized by θ, is determined as

θ̂_N^r = arg min_θ ∫_{−π}^{π} |G(e^{iω}, η̂_N) − G(e^{iω}, θ)|² W(ω) dω.  (19)

Here, G(q, η̂_N) is the high order (estimated) model, and W(ω) is a weighting function. An important question is whether this reduction step also implies a reduction of variance, i.e., if the variance of G(e^{iω}, θ̂_N^r) (viewed as a random variable through its dependence on the estimate G(e^{iω}, η̂_N)) is lower than that of G(e^{iω}, η̂_N). A second question is how this variance compares with the one obtained by the direct identification method, i.e., G(e^{iω}, θ̂_N^d). The somewhat surprising answer is that (19) may in some cases give a lower variance than (7). Let us consider a simple, but still illustrative, example. Note that throughout the paper, the expectation is taken over both u and e.

Example 1. Consider the true system

y(t) = u(t−1) + 0.5u(t−2) + e(t),  (20)

where the input u is white noise with variance 1, and e is white noise with variance λ. We compare two ways of finding a first-order model of this system. First, estimate b^d in the FIR model directly from data:

ŷ(t|b^d) = b^d u(t−1).

This gives the (least-squares) estimate b̂_N^d, with b̂_N^d → E b̂_N^d → 1 as N → ∞. The variance of b̂_N^d is computed as

E(b̂_N^d − 1)² ≈ E( (1/N) Σ_t u(t−1)(0.5u(t−2) + e(t)) / ((1/N) Σ_t u²(t−1)) )² ≈ (0.25 + λ)/N.

Note here that the expectation is taken over both u and e. This is essential for the results of this contribution and is used in the rest of this paper.
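The variance expression in Example 1 is easy to verify by Monte Carlo simulation. A minimal sketch follows (plain NumPy; the sample size, seed, and number of runs are arbitrary choices for the illustration, not values from the paper):

```python
import numpy as np

def direct_estimate(N, lam, rng):
    """One realization of Example 1: least-squares fit of the
    first-order FIR model yhat(t) = b_d * u(t-1)."""
    u = rng.standard_normal(N + 2)            # white input, variance 1
    e = np.sqrt(lam) * rng.standard_normal(N)  # white noise, variance lam
    t = np.arange(2, N + 2)
    y = u[t - 1] + 0.5 * u[t - 2] + e          # true system (20)
    phi = u[t - 1]                             # regressor u(t-1)
    return (phi @ y) / (phi @ phi)             # LS estimate of b_d

rng = np.random.default_rng(0)
N, lam, runs = 400, 1.0, 20000
est = np.array([direct_estimate(N, lam, rng) for _ in range(runs)])

var_mc = N * est.var()      # Monte Carlo estimate of N * Var(b_d)
var_theory = 0.25 + lam     # (0.25 + lambda) from the example
```

The Monte Carlo value of N·Var(b̂^d) should come out close to 0.25 + λ, confirming that the undermodeled term 0.5u(t−2) inflates the variance of the direct estimate.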
The second method is to estimate a high order model (in this example second order),

ŷ(t|b_1, b_2) = b_1 u(t−1) + b_2 u(t−2).

This gives the estimated transfer function G(q, η̂_N) = b̂_{1,N} q^{−1} + b̂_{2,N} q^{−2}, with b̂_{i,N} tending to their true values and each having an asymptotic variance of λ/N. Now, subjecting G(e^{iω}, η̂_N) to the L2 model reduction (19) to an FIR(1) model with W(ω) ≡ 1 gives the reduced model

G(q, θ̂_N^r) = b̂_N^r q^{−1} = b̂_{1,N} q^{−1}.

The variance of the directly estimated first-order model is

Var b̂_N^d ≈ (0.25 + λ)/N,

while the L2 reduced model has

Var b̂_N^r = Var b̂_{1,N} ≈ λ/N,

i.e., it is strictly smaller. Prediction error methods are efficient in these cases (assuming that e is white and normal), i.e., their variances meet the Cramér-Rao bound if the model structure contains the true system (and the measurement noise is white and Gaussian). In those cases no other estimation method can beat the direct estimation method. Still, in this example it was strictly better to estimate the low order model, both in terms of variance and mean square error, by reducing a high order model than to estimate it directly from data. This somewhat unexpected result can clearly only happen if the low order model structure does not contain the true system.

4. Other approaches

Before going into the actual calculations we discuss some related approaches. Some contributions that take into account that the high order model is obtained through an identification experiment when performing model reduction are Porat and Friedlander (1985), Porat (1986), Söderström, Stoica, and Friedlander (1991), Stoica and Söderström (1989), Zhu and Backx (1993, Chapter 7), Wahlberg (1987, 1989), Tjärnström and Ljung (2001), Tjärnström (2002), and Hsia (1977, Chapter 7). The contributions by Porat and Friedlander study ARMA parameter estimation via covariance estimates. These papers contain similar tools as the ones presented in Section 5. However, the ideas only apply to time series models. The following contributors deal with models having input signals.
These approaches are briefly summarized in this section. Söderström et al. (1991) look at nested model structures. In particular, they look for structures that can be embedded

in larger structures which are easy to estimate, such as ARX structures. After estimating the high order structure they reduce the estimate to the low order structure in a weighted non-linear least-squares sense. The method is called an indirect prediction error method. We illustrate the idea using the generalized least-squares structure. Assume that the low order structure is of ARARX type, i.e.,

A(q)y(t) = B(q)u(t) + (1/D(q)) e(t),  (21)

where the polynomials A(q), B(q), and D(q) are of orders n_a, n_b, and n_d, respectively. The structure is parameterized by

θ = (a_1 ... a_{n_a} b_1 ... b_{n_b} d_1 ... d_{n_d})^T.  (22)

Now, rewrite this structure as a high order ARX structure by multiplying with D(q), i.e.,

A(q, θ)D(q, θ) y(t) = B(q, θ)D(q, θ) u(t) + e(t),  (23)

which we write as

R(q, η)y(t) = S(q, η)u(t) + e(t),  (24)
η = (r_1 ... r_{n_r} s_1 ... s_{n_s})^T.

Note here that dim θ = n_a + n_b + n_d ≤ dim η = n_a + n_b + 2n_d. The relation between θ and η is a non-linear mapping given by (23) and (24), i.e., η = F_1(θ). Now, η can be estimated using standard least squares and θ is found by minimizing

θ̂ = arg min_θ (F_1(θ) − η̂_N)^T P̂_η^{−1} (F_1(θ) − η̂_N),

where P̂_η is an estimate of the covariance of η̂. It is shown that the statistical properties of this indirect method are the same as for standard PEM, but it does in many cases use fewer computations to come up with the final estimate.

Wahlberg (1987) uses an approach similar to the one in Söderström et al. (1991). First an nth-order FIR model parameterized by η is estimated, and η̂ is then reduced to a lower order model G(q, θ) subject to

θ̂ = arg min_θ (F_2(θ) − η̂_N)^T R_N (F_2(θ) − η̂_N),

where

F_2(θ) = R_N^{−1} (1/N) Σ_{t=1}^N [G(q, θ)u(t)] φ(t),
φ(t) = (u(t−1) ... u(t−n))^T,
R_N = (1/N) Σ_{t=1}^N φ(t)φ^T(t).

It is shown that the estimate of θ is asymptotically efficient, i.e., its covariance matrix meets the Cramér-Rao bound as the FIR order, n, tends to infinity (in case of white Gaussian noise). Note that both of these approaches (Wahlberg, 1987; Söderström et al., 1991) can coincide with L2 model reduction, e.g., if η is a linear function of θ. This is the case when both θ and η parameterize an FIR model.
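The indirect-PEM idea above can be sketched in a few lines. In the sketch below (NumPy/SciPy), the orders n_a = n_b = n_d = 1, the numerical values, and the helper name `F1` are illustrative assumptions, and the weighting P̂_η^{−1} is replaced by the identity for simplicity; the mapping θ → η is the polynomial multiplication of (23)-(24), and θ is recovered by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import least_squares

def F1(theta):
    """Map low order ARARX parameters theta = (a1, b1, d1) to the
    high order ARX parameters eta = (r1, r2, s1, s2), where
    R(q) = A(q)D(q) and S(q) = B(q)D(q)."""
    a1, b1, d1 = theta
    R = np.polymul([1.0, a1], [1.0, d1])[1:]   # drop the leading 1: (r1, r2)
    S = np.polymul([b1], [1.0, d1])            # (s1, s2)
    return np.concatenate([R, S])

theta_true = np.array([-0.5, 1.0, 0.3])
eta_hat = F1(theta_true)        # noise-free "high order estimate" of eta

# Indirect PEM step with identity weighting:
# theta = arg min |F1(theta) - eta_hat|^2
sol = least_squares(lambda th: F1(th) - eta_hat, x0=np.zeros(3))
theta_rec = sol.x
```

With a real data set, `eta_hat` would come from a linear least-squares ARX fit and the residuals would be weighted by the inverse covariance P̂_η^{−1}; here the exact mapping makes the recovered θ coincide with the true one.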
Zhu and Backx (1993) use another approach. They start by estimating a high order ARX model of order n. This model (Â_N^n, B̂_N^n) is asymptotically unbiased in model order and data, with a variance equal to the noise-to-signal ratio multiplied by the model order, n, divided by the number of data, i.e.,

Var Ĝ_N^n(e^{iω}) ≈ (n/N) Φ_v(ω)/Φ_u(ω).  (25)

See Ljung (1999, 1985), Zhu (1989). Using the estimate (Â_N^n, B̂_N^n) a new input and output sequence is generated from the old input, u(t), according to

u_f(t) = Â_N^n(q)u(t),  y_f(t) = (B̂_N^n(q)/Â_N^n(q)) u_f(t).  (26)

A low order OE model is then estimated from the simulated data {y_f(t), u_f(t)}_{t=1}^N. This approach is asymptotically efficient (in model order and data).

5. The basic tools

To ease the notation, the subscript N in the estimates will be dropped from now on, i.e., we use η̂ = η̂_N, θ̂^r = θ̂_N^r, and θ̂^d = θ̂_N^d. To translate the variance of one estimate η̂ to another θ̂ = f(η̂) we use Gauss' approximation formula (17). To use this result to compute the variance of an L2 reduced model, we need an (asymptotic) expression for how it depends on the high order model. For this we return to (19). Let the high order model be parameterized by η, with estimate η̂. Let θ parameterize a low order model and define

θ̂(η̂) = arg min_θ J(θ, η̂)  (27)

for some function J that depends on the lower order model θ and the high order, estimated, model η̂. For L2-reduction we use

J(θ, η̂) = ∫_{−π}^{π} |G(e^{iω}, θ) − G(e^{iω}, η̂)|² W(ω) dω,  (28)

but the form of J is immaterial for the moment. We assume it to be differentiable, though. Now, since θ̂ minimizes J(θ, η̂), we have

J'_θ(θ̂(η̂), η̂) = 0,  (29)

where J'_θ denotes the partial derivative of J with respect to its first argument. By definition (29) holds for all η̂, so

taking the total derivative with respect to η̂ gives

0 = (d/dη̂) J'_θ(θ̂(η̂), η̂) = J''_{θθ}(θ̂(η̂), η̂) (dθ̂(η̂)/dη̂) + J''_{θη}(θ̂(η̂), η̂)

or

dθ̂(η̂)/dη̂ = −[J''_{θθ}(θ̂(η̂), η̂)]^{−1} J''_{θη}(θ̂(η̂), η̂).  (30)

This expression for the derivative, and Gauss' approximation formula (17), now give the translation of the variance of η̂ to that of θ̂:

P_θ = N Cov θ̂ ≈ [J''_{θθ}(θ*, η*)]^{−1} J''_{θη}(θ*, η*) P_η [J''_{θη}(θ*, η*)]^T [J''_{θθ}(θ*, η*)]^{−1},  (31)

where

η* = lim_{N→∞} η̂,  (32)
θ* = θ(η*).  (33)

This gives us a general expression for investigating variance reduction for any reduction technique that can be written as (27). Especially, it holds for L2 reduced estimates (28).

6. The FIR case

In this section we look at systems of FIR structure. We show the perhaps surprising result that estimating a high order model followed by L2 model reduction never gives higher variance than directly estimating the low order model. Note here once again that the expectation is taken over both u and e in all calculations. Suppose that data is generated by an FIR system with d = d_1 + d_2 parameters, i.e.,

y(t) = Σ_{k=1}^{d_1} b_k u(t−k) + Σ_{k=d_1+1}^{d} b_k u(t−k) + e(t)
     = θ_1^{0T} φ_1(t) + θ_2^{0T} φ_2(t) + e(t) = θ^{0T} φ(t) + e(t),  (34)

where e is white noise with variance λ, and u is a stationary stochastic process, independent of e, with spectrum Φ_u(ω). The definitions of θ_1^0, θ_2^0, φ_1(t), and φ_2(t) should be immediate from (34):

θ^0 = (b_1 ... b_d)^T,  (35)
φ(t) = (u(t−1) ... u(t−d))^T,  (36)

etc. Let us also introduce the notation

R_1 = E φ_1(t)φ_1^T(t),  R_{12} = E φ_1(t)φ_2^T(t) = R_{21}^T,  R_{22} = E φ_2(t)φ_2^T(t).  (37)

Note that the true frequency function can thus be written

G_0(e^{iω}) = (e^{−iω} ... e^{−diω}) θ^0.  (38)

We now seek the best L2 approximation (in the frequency weighting norm Φ_u(ω)) of this system of order d_1:

θ* = arg min_θ ∫_{−π}^{π} |G_0(e^{iω}) − G(e^{iω}, θ)|² Φ_u(ω) dω = arg min_θ E(θ^{0T} φ(t) − θ^T φ_1(t))²,  (39)

where the second step is Parseval's identity. Simple calculations show that the solution is

θ* = [E φ_1(t)φ_1^T(t)]^{−1} E φ_1(t)φ^T(t) θ^0 = R_1^{−1}(R_1  R_{12}) θ^0 = (I  R_1^{−1}R_{12}) θ^0.  (40)

6.1. Direct estimate

Now, the least-squares estimate θ̂^d (in the following called the direct estimate) of order d_1 is

θ̂^d = [Σ_{t=1}^N φ_1(t)φ_1^T(t)]^{−1} Σ_{t=1}^N φ_1(t)y(t)
    = θ_1^0 + [Σ φ_1(t)φ_1^T(t)]^{−1} Σ φ_1(t)φ_2^T(t) θ_2^0 + [Σ φ_1(t)φ_1^T(t)]^{−1} Σ φ_1(t)e(t),  (41)

where the second step follows from (34).
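The limit (40) can be illustrated numerically. In the sketch below (NumPy; the AR(1) input, the orders d_1 = d_2 = 1, and all numerical values are assumptions made only for the illustration), the input correlation gives R_1^{−1}R_{12} = a, so the direct low order estimate converges to θ_1^0 + aθ_2^0 rather than to θ_1^0:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200000
a = 0.6                                    # AR(1) pole: u(t-1), u(t-2) correlated
w = rng.standard_normal(N + 3)
u = np.zeros(N + 3)
for t in range(1, N + 3):
    u[t] = a * u[t - 1] + w[t]

theta0 = np.array([1.0, 0.5])              # true FIR system, d1 = d2 = 1
t = np.arange(2, N + 2)
e = rng.standard_normal(N)
y = theta0[0] * u[t - 1] + theta0[1] * u[t - 2] + e

# Direct least-squares estimate of the order-d1 model, cf. (41)
phi1 = u[t - 1]
theta_d = (phi1 @ y) / (phi1 @ phi1)

# Limit (40): theta* = theta1 + R1^{-1} R12 * theta2, and for an AR(1)
# input R12 = a * R1, so theta* = theta1 + a * theta2.
theta_star = theta0[0] + a * theta0[1]
```

The large sample size makes the sample estimate land close to the theoretical limit θ* = 1.3.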
This gives that

E θ̂^d ≈ θ*  (42)

with an approximation error of order 1/N, cf. (72). Using θ* instead of E θ̂^d in the covariance calculations results in an error of order 1/N². This does not affect the results since the covariance expressions are correct of order 1/N. Moreover, the approximation involved also concerns the indicated inverse. When N is large the law of large numbers can be applied to give the result. (A technical comment: In the definition of the estimate, one may have to truncate for close-to-singular matrices. See Appendix 9.B in

Ljung (1999) for such technicalities.) Moreover,

Cov θ̂^d = E(θ̂^d − E θ̂^d)(θ̂^d − E θ̂^d)^T
≈ E [Σ φ_1(t)φ_1^T(t)]^{−1} [Σ φ_1(t)e(t)][Σ φ_1(t)e(t)]^T [Σ φ_1(t)φ_1^T(t)]^{−1}
+ E ([Σ φ_1(t)φ_1^T(t)]^{−1} [Σ φ_1(t)φ_2^T(t)] θ_2^0 − R_1^{−1}R_{12}θ_2^0)(·)^T
= (λ/N) R_1^{−1} + E H_N θ_2^0 θ_2^{0T} H_N^T,  (43)

where

H_N = [Σ_{t=1}^N φ_1(t)φ_1^T(t)]^{−1} [Σ_{t=1}^N φ_1(t)φ_2^T(t)] − R_1^{−1}R_{12}.  (44)

6.2. Reduced estimate

Let us now turn to the model reduction case. We first estimate the full system of order d using least squares. That gives the estimate η̂ with

E η̂ ≈ θ^0  (45)

and

Cov η̂ ≈ (λ/N)[E φ(t)φ^T(t)]^{−1} = (λ/N) [R_1  R_{12}; R_{21}  R_{22}]^{−1}  (46)

with obvious partitioning according to (37). We insert this high order estimate into (28) using the frequency weighting W(ω) = Φ_u(ω) and perform the model reduction (27). Note that, by Parseval's relation, (28) can also be written

J(θ, η̂) = E*(θ^T φ*_1(t) − η̂^T φ*(t))²,  (47)

cf. (39). Here φ*(t) is constructed from u* as φ(t) in (34), where u* has the spectrum W(ω) = Φ_u(ω). In the notation of (29) we have

J''_{θθ}(θ, η̂) = E φ_1(t)φ_1^T(t) = R_1,
J''_{θη}(θ, η̂) = −E φ_1(t)φ^T(t) = −(R_1  R_{12}).  (48)

From (31), (46), and (48) we now find that the covariance of the reduced estimate equals

Cov θ̂^r ≈ (λ/N) R_1^{−1}(R_1  R_{12}) [R_1  R_{12}; R_{21}  R_{22}]^{−1} (R_1  R_{12})^T R_1^{−1} = (λ/N) R_1^{−1},  (49)

where the last step simply follows from the definition of an inverse matrix. Comparing with (43) we see that this variance is strictly smaller than that obtained by direct identification, provided θ_2^0 ≠ 0, that is, provided the true system is of higher order than d_1. However, if the true system is of order d_1 we also find that the reduced model reaches the Cramér-Rao bound (if e(t) is Gaussian), i.e.,

Cov θ̂^r ≈ (λ/N) R_1^{−1}.  (50)

The conclusion from this is that the variance of the reduced FIR model is never higher than the variance obtained by direct estimation.

Comments: We remark that the variance reduction is related to performing the reduction step correctly. If (47) is approximated by the sample sum over the same input data as used to estimate η̂, it follows that the reduced estimate is always equal to the direct one.
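The last step in (49) can be checked directly: since (R_1  R_{12}) is the first block row of the full covariance matrix R, we have (R_1  R_{12})R^{−1} = (I  0), so the sandwich collapses to R_1. A small NumPy check (the block sizes and the random positive definite R are arbitrary choices for the illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2 = 3, 2
# A random positive definite covariance matrix R, partitioned as in (37)
M = rng.standard_normal((d1 + d2, d1 + d2))
R = M @ M.T + (d1 + d2) * np.eye(d1 + d2)
R1  = R[:d1, :d1]
R12 = R[:d1, d1:]

# (R1 R12) is the first block row of R, so (R1 R12) R^{-1} = (I 0),
# and hence (R1 R12) R^{-1} (R1 R12)^T = R1, as used in (49).
block_row = np.hstack([R1, R12])
lhs = block_row @ np.linalg.solve(R, block_row.T)
```

The identity holds for any symmetric positive definite R, which is why (49) needs no assumptions beyond invertibility of the full covariance matrix.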
This corresponds to choosing the weighting function equal to the discrete Fourier transform of the used input sequence:

W(ω) = |U_N(ω)|²,  (51)
U_N(ω) = (1/√N) Σ_{t=1}^N u(t) e^{−iωt}.  (52)

Moreover, the variance reduction can be traced to the fact that the approximation aspect of the direct estimation method depends on the finite sample properties of u over t = 1, ..., N. If the expectation is carried out only with respect to e we have (see (40) and (41))

E_e θ̂^d = θ* + H_N θ_2^0,

and this is the reason for the increased variance in the direct method.

7. Main result

The result that it may be advantageous to use L2 model reduction of a high order estimated model, rather than to directly estimate a low order one, is intriguing. Using the basic

tools, more general situations can be investigated. Here we focus on general OE model structures. We assume that the low order model structure contains the true system, i.e., we look at the case of no undermodeling. This is somewhat simplified from the general case where undermodeled low order models are included, but necessary to complete the proof. In Tjärnström (2002) recent results on the undermodeling case are discussed. Let the underlying system be given by

y(t) = G_0(q)u(t) + v(t) = (B_0(q)/F_0(q)) u(t) + v(t),  v(t) = H_0(q)e(t),  (53)

with the same assumptions on e and u as in (34). Parameterize two OE model structures G(q, η) and G(q, θ) where dim η ≥ dim θ, i.e.,

η = (b_1 ... b_{n_b} f_1 ... f_{n_f})^T,  (54)
θ = (b_1 ... b_{n_b0} f_1 ... f_{n_f0})^T,  (55)

where n_b ≥ n_b0 and n_f ≥ n_f0. Furthermore, we assume the existence of some η* and a unique θ* such that

G(e^{iω}, η*) = G(e^{iω}, θ*) = G_0(e^{iω})  (56)

for almost all ω, and that no other parameterization with fewer parameters than dim θ fulfills (56). In other words, the true model order is [n_b0, n_f0]. We now state the main theorem, which is proved in the next section.

Theorem 2 (Reduced model variance). Assume that the true system is given by y(t) = G_0(q)u(t) + v(t), where v(t) = H_0(q)e(t), e(t) is white noise with variance λ, and u is a stationary stochastic process independent of v, with known spectrum Φ_u(ω). We assume that u and e have bounded fourth-order moments. Furthermore, we assume that G(q, η) and G(q, θ) (with dim η ≥ dim θ) are two model structures of OE type (4) that both contain the true system G_0(q), and that no other parameterization with fewer parameters than θ contains the true system. Let η̂_N minimize the regularized loss function W_N(η) (given by (9)) and let θ̂_N^r minimize

J(θ, η̂_N) = ∫_{−π}^{π} |G(e^{iω}, θ) − G(e^{iω}, η̂_N)|² Φ_u(ω) dω.

Let the direct estimate be defined by θ̂_N^d = arg min_θ V_N(θ), where V_N is given by (5).
Then the asymptotic variance of θ̂_N^r tends to the variance of the direct estimate θ̂_N^d as δ → 0, i.e.,

lim_{δ→0} lim_{N→∞} N Cov θ̂_N^r = lim_{N→∞} N Cov θ̂_N^d.

Moreover, we find that the reduced model meets the Cramér-Rao bound if the measurement noise is white and Gaussian.

8. Proof of the main result

In this section we present the proof of Theorem 2. First we prove the theorem in the case that the measurement noise is white, i.e., H_0(q) = 1. After that we prove the result for general H_0. Note from (54) and (55) that the parameters θ form a subset of η. This can be written as

θ = S_0^T η,  (57)

where

S_0 = (I_1 ... I_{n_b0} I_{n_b+1} ... I_{n_b+n_f0})  (58)

and I_j is the jth column of the (n_b + n_f) × (n_b + n_f) identity matrix. The gradients of ŷ(t|η) and ŷ(t|θ) equal (see (12))

ψ(t, η) = (d/dη) G(q, η)u(t) = (d/dη) (B(q, η)/F(q, η)) u(t)
= (1/F(q, η)) (q^{−n_k}, ..., q^{−n_k−n_b+1}, −q^{−1}G(q, η), ..., −q^{−n_f}G(q, η))^T u(t)  (59)

and

ψ(t, θ) = (d/dθ) G(q, θ)u(t) = (d/dθ) (B(q, θ)/F(q, θ)) u(t)
= (1/F(q, θ)) (q^{−n_k}, ..., q^{−n_k−n_b0+1}, −q^{−1}G(q, θ), ..., −q^{−n_f0}G(q, θ))^T u(t).  (60)

By observing that

B(q, η*)/F(q, η*) = G_0(q),  (61)

we find that

B(q, η*) = B_0(q)L(q)  and  F(q, η*) = F_0(q)L(q).  (62)

Here L(q) is a monic FIR filter of length r + 1 and

r = min(n_b − n_b0, n_f − n_f0),  (63)

i.e.,

L(q) = 1 + l_1 q^{−1} + ... + l_r q^{−r} = Σ_{k=0}^r l_k q^{−k},  (64)

where we use the convention that l_0 = 1. We also obviously have that

B(q, θ*)/F(q, θ*) = G_0(q).  (65)

Putting (59), (61), and (62) together gives

ψ(t, η*) = (1/(L(q)F_0(q))) (q^{−n_k}, ..., q^{−n_k−n_b+1}, −q^{−1}G_0(q), ..., −q^{−n_f}G_0(q))^T u(t).  (66)

In the same way we get from (60) and (65)

ψ(t, θ*) = (1/F_0(q)) (q^{−n_k}, ..., q^{−n_k−n_b0+1}, −q^{−1}G_0(q), ..., −q^{−n_f0}G_0(q))^T u(t).  (67)

From these two expressions and utilizing (57) we get the important relation

ψ(t, θ*) = S_0^T L(q) ψ(t, η*).  (68)

Let us now consider (28) with W(ω) = Φ_u(ω):

J(θ, η̂) = ∫_{−π}^{π} |G(e^{iω}, θ) − G(e^{iω}, η̂)|² Φ_u(ω) dω = E[(G(q, θ) − G(q, η̂))u(t)]² = E ε̃²(t, θ, η̂),  (69)

with obvious definition of ε̃(t, θ, η̂). Note that η̂ should be regarded as fixed (independent of u) in this expression and that

ε̃(t, θ*, η*) = 0, ∀t,  (70)

according to (56). Define as before

θ̂^r = arg min_θ J(θ, η̂).  (71)

From the discussion in Ljung (1999, Appendix 9.B) it follows that the difference between E θ̂^r and θ* (defined by (33)) is small, i.e.,

|E θ̂^r − θ*| ≤ C/N  (72)

for some constant C, according to Ljung (1999, Eq. (9B.3)). So the two-step method (estimation and reduction) gives approximately the same limiting estimate as the direct estimation method. In order to calculate the variance of the reduced order model we need to derive the expressions for J''_{θθ}(θ*, η*) and J''_{θη}(θ*, η*) from (69):

J'_θ(θ, η̂) = E ψ(t, θ) ε̃(t, θ, η̂),  (73)

J''_{θθ}(θ, η̂) = E ε̃(t, θ, η̂)(d/dθ)ψ(t, θ) + E ψ(t, θ)ψ^T(t, θ),  (74)

J''_{θη}(θ, η̂) = −E ψ(t, θ)ψ^T(t, η̂).  (75)

According to (70), the first term in (74) vanishes at (θ*, η*). Evaluating the last two expressions at (θ*, η*) gives

J''_{θθ}(θ*, η*) = E ψ(t, θ*)ψ^T(t, θ*),  (76)
J''_{θη}(θ*, η*) = −E ψ(t, θ*)ψ^T(t, η*).  (77)

Next, the covariance function of the gradient ψ(t, η*) is defined as

R_η(τ) = E ψ(t + τ, η*)ψ^T(t, η*),  (78)

and similarly R_θ(τ) for ψ(t, θ*). This allows us to write

[E ψ(t, η*)ψ^T(t, η*) + δI]^{−1} = (R_η(0) + δI)^{−1} = R̃_η^{−1}(0),  (79)

where the last equality is the definition of R̃_η(0). We continue by giving a lemma regarding rank deficient matrices.

Lemma 3. Let A be an n × n-dimensional positive semidefinite symmetric matrix of rank m ≤ n. Define Ã = A + δI with δ > 0.
Then the following holds:

(i) Ã^{−1}A = AÃ^{−1} = I − δÃ^{−1};
(ii) lim_{δ→0+} δÃ^{−1} x = 0 for every x in the range R(A) of A.

Proof. (i) I − δÃ^{−1} = Ã^{−1}Ã − δÃ^{−1} = Ã^{−1}(A + δI) − δÃ^{−1} = Ã^{−1}A. The other equality follows similarly.

(ii) Since A is symmetric it follows that

A = UDU^T  (80)

with D = diag(d_1, ..., d_m, 0, ..., 0) and UU^T = U^TU = I. Adding δI to both sides of (80) gives Ã = A + δI = U(D + δI)U^T. Inverting both sides gives (since U^{−1} = U^T)

Ã^{−1} = U(D + δI)^{−1}U^T.

Hence we get

δÃ^{−1} = U D̃ U^T,  D̃ = diag(δ/(d_1 + δ), ..., δ/(d_m + δ), 1, ..., 1).

From this it follows that

lim_{δ→0+} δÃ^{−1} x = U (lim_{δ→0+} D̃) U^T x = U diag(0, ..., 0, 1, ..., 1) U^T x = 0  for x ∈ R(A). □

Before presenting the next lemma we extend the definition of S_0 in (58) to

S_k = (I_{k+1} ... I_{k+n_b0} I_{k+n_b+1} ... I_{k+n_b+n_f0}).  (81)

Lemma 4. Let ψ(t, η*), R_η(τ), and R_θ(τ) be given by (66) and (78). Then it holds that:

(i) ψ^T(t − k, η*) S_0 = ψ^T(t, η*) S_k, 0 ≤ k ≤ r;
(ii) R_θ(τ) = Σ_{m=0}^r Σ_{k=0}^r l_m l_k S_m^T R_η(τ) S_k.

Proof. (i) First, let (ψ)_j denote the jth element of the vector ψ. Studying the jth, 1 ≤ j ≤ n_b − k, element of ψ(t − k, η*), where 0 ≤ k ≤ r, gives

(ψ(t − k, η*))_j = (q^{−n_k−k−j+1}/(L(q)F_0(q))) u(t) = (ψ(t, η*))_{k+j}.

Similarly, for n_b + 1 ≤ j ≤ n_b + n_f − k we get

(ψ(t − k, η*))_j = −(q^{−(k+j−n_b)} G_0(q)/(L(q)F_0(q))) u(t) = (ψ(t, η*))_{k+j}.

Now the multiplication ψ^T(t − k, η*)S_0 picks out the first n_b0 elements and the elements with indices between n_b + 1 and n_b + n_f0 from ψ(t − k, η*), whereas ψ^T(t, η*)S_k picks out the elements shifted k steps away (relative to S_0) from ψ(t, η*). This means that we pick out exactly those elements corresponding to each other by the multiplication with S_0 and S_k.

(ii) This is proved using (68) and (i):

R_θ(τ) = E ψ(t, θ*)ψ^T(t − τ, θ*)
= E S_0^T L(q)ψ(t, η*) [L(q)ψ(t − τ, η*)]^T S_0
= E S_0^T [Σ_{m=0}^r l_m q^{−m} ψ(t, η*)] [Σ_{n=0}^r l_n q^{−n} ψ^T(t − τ, η*)] S_0
= Σ_{m=0}^r Σ_{n=0}^r l_m l_n E S_0^T ψ(t − m, η*) ψ^T(t − τ − n, η*) S_0
= Σ_{m=0}^r Σ_{n=0}^r l_m l_n E S_m^T ψ(t, η*) ψ^T(t − τ, η*) S_n
= Σ_{m=0}^r Σ_{n=0}^r l_m l_n S_m^T R_η(τ) S_n. □

We are now ready to prove Theorem 2 in the case of H_0(q) = 1. Estimation of the high order system G(q, η) by minimizing the regularized criterion W_N(η) gives η̂ with covariance

Cov η̂ ≈ (λ/N)[E ψ(t, η*)ψ^T(t, η*) + δI]^{−1} [E ψ(t, η*)ψ^T(t, η*)] [E ψ(t, η*)ψ^T(t, η*) + δI]^{−1}  (82)

according to (16). Putting (31), (77), and (82) together we find that

Cov θ̂^r ≈ (λ/N)[E ψ(t, θ*)ψ^T(t, θ*)]^{−1} [E ψ(t, θ*)ψ^T(t, η*)] [E ψ(t, η*)ψ^T(t, η*) + δI]^{−1} [E ψ(t, η*)ψ^T(t, η*)] [E ψ(t, η*)ψ^T(t, η*) + δI]^{−1} [E ψ(t, η*)ψ^T(t, θ*)] [E ψ(t, θ*)ψ^T(t, θ*)]^{−1}.  (83)

We would like to show that (83) tends to

Cov θ̂^d ≈ (λ/N)[E ψ(t, θ*)ψ^T(t, θ*)]^{−1}  (84)

as δ → 0, which is the covariance θ̂ would have if it had been estimated directly from the data {u(t), y(t)}_{t=1}^N.
This can equivalently be stated as

[E ψ(t, θ*)ψ^T(t, θ*)]^{−1} = lim_{δ→0} [E ψ(t, θ*)ψ^T(t, θ*)]^{−1} [E ψ(t, θ*)ψ^T(t, η*)] [E ψ(t, η*)ψ^T(t, η*) + δI]^{−1} [E ψ(t, η*)ψ^T(t, η*)] [E ψ(t, η*)ψ^T(t, η*) + δI]^{−1} [E ψ(t, η*)ψ^T(t, θ*)] [E ψ(t, θ*)ψ^T(t, θ*)]^{−1}.  (85)

Using (68) we get

E ψ(t, θ*)ψ^T(t, η*) = E S_0^T L(q)ψ(t, η*) ψ^T(t, η*)
= Σ_{m=0}^r l_m E S_0^T ψ(t − m, η*) ψ^T(t, η*)
= Σ_{m=0}^r l_m E S_m^T ψ(t, η*) ψ^T(t, η*)
= Σ_{m=0}^r l_m S_m^T R_η(0),

where we used Lemma 4(i). Plugging this into the right-hand side of (85) and using Lemma 3(i) gives

[Σ_{m=0}^r l_m S_m^T R_η(0)] R̃_η^{−1}(0) R_η(0) R̃_η^{−1}(0) [Σ_{n=0}^r l_n R_η(0) S_n]
= Σ_{m=0}^r Σ_{n=0}^r l_m l_n S_m^T (I − δR̃_η^{−1}(0))(I − δR̃_η^{−1}(0)) R_η(0) S_n
= Σ_{m=0}^r Σ_{n=0}^r l_m l_n S_m^T [R_η(0) − 2δI + 3δ²R̃_η^{−1}(0) − δ³(R̃_η^{−1}(0))²] S_n.

Letting δ → 0, the second term vanishes and the last two terms vanish according to Lemma 3(ii). Moreover, the first term equals R_θ(0) = E ψ(t, θ*)ψ^T(t, θ*) according to Lemma 4(ii), and the result follows.

Since the direct estimate meets the Cramér-Rao bound if the noise is white and Gaussian, we get that the reduced model also meets the Cramér-Rao bound in this case.

Before presenting the proof of the theorem in the general non-white measurement noise case, we need to state another lemma.

Lemma 5. For R_η(τ) defined by (78) and (66), and R̃_η(0) defined by (79), it holds that

lim_{δ→0} δ R̃_η^{−1}(0) R_η(τ) = 0.

Proof. Let

x(t) = (1/(L(q)F_0(q))) u(t),  x̄(t) = −G_0(q)x(t);

then we can rewrite (66) as

ψ(t, η*) = (x(t − n_k), ..., x(t − n_k − n_b + 1), x̄(t − 1), ..., x̄(t − n_f))^T.

Since G_0(q) = B_0(q)/F_0(q) we get

B_0(q)x(t) + F_0(q)x̄(t) = 0,

or in matrix notation

(b_1 ... b_{n_b0} 1 f_1 ... f_{n_f0}) (x(t − n_k − 1), ..., x(t − n_k − n_b0), x̄(t − 1), x̄(t − 2), ..., x̄(t − n_f0 − 1))^T = 0, ∀t.  (86)

This can be expressed in terms of the gradient ψ(t, η*) as

0 = (0, b_1, ..., b_{n_b0}, 0, ..., 0, 1, f_1, ..., f_{n_f0}, 0, ..., 0) ψ(t, η*) = w_1^T ψ(t, η*),  (87)

where the first run of trailing zeros has length n_b − n_b0 − 1 and the second has length n_f − n_f0 − 1, i.e., w_1 is orthogonal to the gradient. Moreover, since we know that the rank deficiency of R_η(0) equals r (see (63)), we realize that it is possible to construct a total of r time-independent vectors, w_1, ..., w_r, that are orthogonal to ψ(t, η*) from the relation (86). These have the same structure as w_1 in (87), but with the non-zero entries shifted downwards, e.g.,

w_r = (0, ..., 0, b_1, ..., b_{n_b0}, 0, ..., 0, 1, f_1, ..., f_{n_f0})^T.

Since w_1, ..., w_r are orthogonal to ψ(t, η*) it follows that they are also eigenvectors of R_η(τ), since

w_k^T R_η(τ) w_k = E w_k^T ψ(t, η*) ψ^T(t − τ, η*) w_k = E 0 · ψ^T(t − τ, η*) w_k = 0,  k = 1, ..., r.

From this it follows that the singular value decomposition (SVD) of R_η(τ) is of the form

R_η(τ) = (U_1, U_2) [Σ_τ  0; 0  0] (V_1, V_2)^T,

where U_2 = (w_1 ... w_r), V_2 = (w_1 ... w_r), and

Σ_τ = diag(σ_{1,τ}, ..., σ_{n_b+n_f−r,τ}),  σ_{k,τ} ≥ 0,  σ_{k,0} > 0,  k = 1, ..., n_b + n_f − r.  (88)

Here the subindex τ is included to indicate the dependency on τ. Note the strict inequality for σ_{k,0}. Now, since U_2 and V_2 are independent of τ, it follows that

V_2^T U_1 = 0,  V_1^T U_2 = 0,  V_2^T U_2 = I.

Moreover, the SVD of R̃_η(0) = R_η(0) + δI equals

R̃_η(0) = (U_{1,0}, U_2) [Σ_0 + δI  0; 0  δI] (V_{1,0}, V_2)^T.

Putting all of the above together we get

δ R̃_η^{−1}(0) R_η(τ)
= (U_{1,0}, U_2) [δ(Σ_0 + δI)^{−1}  0; 0  I] (V_{1,0}, V_2)^T (U_1, U_2) [Σ_τ  0; 0  0] (V_1, V_2)^T
= U_{1,0} δ(Σ_0 + δI)^{−1} V_{1,0}^T U_1 Σ_τ V_1^T → 0, as δ → 0,

where the second equality uses V_2^T U_1 = 0 and the last statement follows from (88). □

We are now ready to continue with the proof of Theorem 2 in the case of non-white measurement noise, i.e., H_0(q) ≠ 1.
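As a numerical sanity check of the regularization arguments above, Lemma 3 can be verified on a small random rank-deficient matrix (a NumPy sketch; the dimensions, seed, and δ values are arbitrary choices for the illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 3
B = rng.standard_normal((n, m))
A = B @ B.T                       # symmetric PSD, rank m < n

def checks(delta):
    """Return the residual of Lemma 3(i) and the size of the
    Lemma 3(ii) quantity delta * (A + delta I)^{-1} x for x in R(A)."""
    At_inv = np.linalg.inv(A + delta * np.eye(n))
    # Lemma 3(i): (A + delta I)^{-1} A = I - delta (A + delta I)^{-1}
    identity_gap = np.linalg.norm(At_inv @ A - (np.eye(n) - delta * At_inv))
    x = A @ np.ones(n)            # a vector in the range of A
    shrink = np.linalg.norm(delta * At_inv @ x)
    return identity_gap, shrink

gap_a, shrink_a = checks(1e-3)
gap_b, shrink_b = checks(1e-6)
```

The identity in (i) holds to machine precision for any δ, while the quantity in (ii) decays essentially linearly in δ for vectors in the range of A, which is what makes the regularized covariances (82)-(83) collapse to (84) in the limit.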
From (11) we know that the covariance of the direct estimate, θ̂^d, equals

Cov θ̂^d ≈ (λ/N)[E ψ(t, θ*)ψ^T(t, θ*)]^{−1} [E ψ̃(t, θ*)ψ̃^T(t, θ*)] [E ψ(t, θ*)ψ^T(t, θ*)]^{−1},

and the covariance of the L2 reduced estimate, θ̂^r, equals (cf. (83))

Cov θ̂^r ≈ (λ/N)[E ψ(t, θ*)ψ^T(t, θ*)]^{−1} [E ψ(t, θ*)ψ^T(t, η*)] [E ψ(t, η*)ψ^T(t, η*) + δI]^{−1} [E ψ̃(t, η*)ψ̃^T(t, η*)] [E ψ(t, η*)ψ^T(t, η*) + δI]^{−1} [E ψ(t, η*)ψ^T(t, θ*)] [E ψ(t, θ*)ψ^T(t, θ*)]^{−1}.

Showing equality between these two expressions as δ → 0 is the same as showing that

E ψ̃(t, θ*)ψ̃^T(t, θ*) = lim_{δ→0} [E ψ(t, θ*)ψ^T(t, η*)] [E ψ(t, η*)ψ^T(t, η*) + δI]^{−1} [E ψ̃(t, η*)ψ̃^T(t, η*)] [E ψ(t, η*)ψ^T(t, η*) + δI]^{−1} [E ψ(t, η*)ψ^T(t, θ*)].

Expressing the left- and right-hand sides of this equation in terms of the covariance function gives

Σ_{k=0}^∞ Σ_{l=0}^∞ h_k h_l R_θ(k − l)
= lim_{δ→0} [Σ_{m=0}^r l_m S_m^T R_η(0)] R̃_η^{−1}(0) [Σ_{k=0}^∞ Σ_{l=0}^∞ h_k h_l R_η(k − l)] R̃_η^{−1}(0) [Σ_{n=0}^r l_n R_η(0) S_n].  (89)

Continuing to expand the right-hand side of (89) using Lemma 3(i) we get

RHS = [Σ_{m=0}^r l_m S_m^T (I − δR̃_η^{−1}(0))] [Σ_{k=0}^∞ Σ_{l=0}^∞ h_k h_l R_η(k − l)] [Σ_{n=0}^r l_n (I − δR̃_η^{−1}(0)) S_n]
= Σ_{m,n} Σ_{k,l} l_n l_m h_k h_l S_m^T R_η(k − l) S_n
− δ Σ_{m,n} Σ_{k,l} l_n l_m h_k h_l S_m^T R̃_η^{−1}(0) R_η(k − l) S_n
− δ Σ_{m,n} Σ_{k,l} l_n l_m h_k h_l S_m^T R_η(k − l) R̃_η^{−1}(0) S_n
+ δ² Σ_{m,n} Σ_{k,l} l_n l_m h_k h_l S_m^T R̃_η^{−1}(0) R_η(k − l) R̃_η^{−1}(0) S_n.

Here the second and third terms tend to zero as δ → 0 due to Lemma 5. The fourth term also tends to zero since δR̃_η^{−1}(0) is bounded for small δ (see the proof of Lemma 3(ii)). In short,

lim_{δ→0} RHS = Σ_{m,n} Σ_{k,l} l_n l_m h_k h_l S_m^T R_η(k − l) S_n = Σ_{k=0}^∞ Σ_{l=0}^∞ h_k h_l R_θ(k − l),

where the last equality follows from Lemma 4(ii). Looking back at (89) we see that the theorem is proved. □

9. Example

To illustrate the results from the previous sections we give a simple simulation example. The true system is given by the following OE structure:

y(t) = (B_0(q)/F_0(q)) u(t) + v(t),
F_0(q) = 1 − 0.7q^{−1} + 0.52q^{−2} − 0.092q^{−3} − 0.904q^{−4},
B_0(q) = 2q^{−1} − q^{−2}.  (90)

The system is estimated using N = 1000 input-output data. Different noise and input colors are used. A total of four different evaluations of the L2 model reduction scheme are presented:

(1) white input and white noise, i.e., u(t) = w_1(t), v(t) = w_2(t);
(2) colored input and white noise, i.e., u(t) = Γ_u(q)w_1(t), v(t) = w_2(t);
(3) white input and colored noise, i.e., u(t) = w_1(t), v(t) = Γ_v(q)w_2(t);
(4) colored input and colored noise, i.e., u(t) = Γ_u(q)w_1(t), v(t) = Γ_v(q)w_2(t).

Here w_1(t) and w_2(t) are white Gaussian processes with variance 1, and Γ_u(q) and Γ_v(q) are given by

Γ_u(q) = 0.5/(1 − 1.2q^{−1} + 0.7q^{−2}),  (91)
Γ_v(q) = 0.9/(1 + 0.5q^{−1}).  (92)

The Bode diagrams of Γ_u and Γ_v are displayed in Fig. 1 together with the true system, G_0 = B_0/F_0. The evaluation is performed according to the following.
An OE model of order 6 is estimated in each case, giving $\hat\theta$, and reduced in the $L_2$ norm to the correct order, giving $\hat\theta_r$.

Fig. 1. Bode diagram of the true system, $G_0(e^{i\omega})$ (solid), the noise color, $L_v(e^{i\omega})$ (dashed), and the input color, $L_u(e^{i\omega})$ (dash-dotted).

The reduced model is estimated in the following way. A new input sequence, $u_s(t)$, of length 10N is generated with the same spectrum as the original input. Then a new output sequence is simulated according to $y_s(t)=G(q,\hat\theta)u_s(t)$. Using these input-output data, the low order model $\hat\theta_r$ is estimated. This procedure (of simulating new data) slightly increases the variance of $\hat\theta_r$ (compared to performing the minimization of (28)), but this error is of order 1/(10N) and can therefore be neglected. Another OE model, $\hat\theta_d$, of correct order is also estimated directly from the original data. In order to avoid local minima, the estimation algorithm is initialized at the optimum. To illustrate the results of this contribution graphically, we chose to project the six-dimensional covariance matrices down to one-dimensional scalars, namely the variance of the prediction error for each model. That is, for the two models (the directly estimated one and the reduced one) the loss functions on validation data are calculated. The result is plotted in Fig. 2. This is repeated 1000 times (giving one cross in each figure for every estimate).

Fig. 2. Results from 1000 simulations. Y-axis: loss function on validation data using $L_2$ model reduction. X-axis: loss function on validation data using direct estimation. Every cross represents one simulation. The solid line is y = x, and the dashed line is the least-squares estimate of a line y = kx + m from the simulations. The circle represents the mean of all simulations. (a) White noise and white input. (b) White noise and colored input. (c) Colored noise and white input. (d) Colored noise and colored input.

Figs. 2(a)-(d) correspond to items (1)-(4) in the list above, respectively. From the results presented in Fig. 2, we see that the loss function on validation data follows the straight line y = x very accurately in all four cases. This gives us good confirmation of the results in Section 7, i.e., that the variance of the reduced model equals the variance of the directly estimated one (asymptotically).
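A simplified, self-contained analogue of this Monte Carlo comparison can be sketched in a few lines. This is an illustrative sketch, not the paper's OE setup: it uses a hypothetical 2-tap FIR true system, white signals, and least-squares fits, and the "reduction" step refits a low-order model to long noise-free data simulated from the high-order estimate (mirroring the procedure described above):

```python
import numpy as np

rng = np.random.default_rng(1)

def fir_regressors(u, n):
    """Regressor matrix for an n-tap FIR model, zero initial conditions."""
    N = len(u)
    X = np.zeros((N, n))
    for k in range(n):
        X[k:, k] = u[:N - k]
    return X

g0 = np.array([2.0, -1.0])        # hypothetical true 2-tap FIR system
N, n_hi, n_lo, runs = 400, 10, 2, 200

# Noise-free validation data (white validation input).
uv = rng.standard_normal(2000)
Xv = fir_regressors(uv, n_lo)
yv = Xv @ g0

loss_d, loss_r = [], []
for _ in range(runs):
    u = rng.standard_normal(N)
    y = fir_regressors(u, n_lo) @ g0 + 0.5 * rng.standard_normal(N)
    th_d = np.linalg.lstsq(fir_regressors(u, n_lo), y, rcond=None)[0]   # direct low-order fit
    th_hi = np.linalg.lstsq(fir_regressors(u, n_hi), y, rcond=None)[0]  # high-order fit
    # L2 reduction: refit a low-order model to long noise-free data
    # simulated from the high-order model (same input spectrum: white).
    us = rng.standard_normal(10 * N)
    ys = fir_regressors(us, n_hi) @ th_hi
    th_r = np.linalg.lstsq(fir_regressors(us, n_lo), ys, rcond=None)[0]
    loss_d.append(np.mean((yv - Xv @ th_d) ** 2))
    loss_r.append(np.mean((yv - Xv @ th_r) ** 2))

print(np.mean(loss_d), np.mean(loss_r))  # typically nearly equal
```

With white input the two average validation losses come out essentially equal, echoing the y = x behavior in Fig. 2; the FIR results of the paper say the reduced route is in general at least as good.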

10. Conclusions

The main result of this paper is that applying $L_2$ model reduction to an identified model gives essentially optimal reduction of the variance of that model. In particular, it follows from our results that:

- If the true system is of a certain order n, and a higher order model of output error type is first estimated and then $L_2$ reduced, in the norm weighted by the input spectrum, to order n, then the variance of that model is the same as if an nth-order output error model had been directly estimated from data.

- If a high order FIR model is estimated from data in a structure that can correctly describe the system, and this model is $L_2$ reduced to a lower order, then we in general obtain a model with smaller variance than a directly estimated low order FIR model.

This implies that high order output error modeling followed by $L_2$ model reduction makes optimal use of the information content in the data if the measurement noise is white and Gaussian and the true system is of OE type. Both the direct and the reduced estimates then meet the Cramér-Rao lower bound. This cannot be outperformed by other model reduction techniques.

All the results are derived taking expectations over both u and e. Different results are obtained if the expectation is taken only over e. Note also that the results in this paper rely on the model reduction being performed in the $L_2$ norm weighted by the true input spectrum. The results may be quite different if the weighting is chosen as an estimate of the input spectrum.

In general the low order model has some bias. Having arrived at the simple model by model reduction of a high order model gives an estimate of the bias as the difference between the two models. At the same time, the variance of the low order model is kept small, according to the results in this paper for FIR models and according to Tjärnström (2002) for general linear output error models.
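The Cramér-Rao bound referred to above can be written, in the notation of the preceding sections, in its standard asymptotic form (here $\lambda_0$ is the innovations variance, and the inequality is in the matrix sense for asymptotically unbiased estimators):

```latex
\operatorname{Cov}\hat\theta \;\succeq\; \frac{\lambda_0}{N}\Big[\bar E\,\psi(t,\theta_0)\psi^{\mathsf T}(t,\theta_0)\Big]^{-1}
```

When $e(t)$ is white and Gaussian, the prediction error estimate attains this bound asymptotically, which is why no model reduction scheme can improve on it.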
This gives advantages over a directly estimated low order model, which has higher variance and a bias error that requires special measures to assess.

References

Glover, K. (1984). All optimal Hankel-norm approximations of linear multivariable systems and their $L^\infty$-error bounds. International Journal of Control, 39(6), 1115-1193.
Hsia, T. C. (1977). Identification: Least squares methods. Lexington, MA: Lexington Books.
Kabaila, P. V. (1983). On output-error methods for system identification. IEEE Transactions on Automatic Control, 28.
Ljung, L. (1985). Asymptotic variance expressions for identified black-box transfer function models. IEEE Transactions on Automatic Control, 30(9).
Ljung, L. (1999). System identification: Theory for the user (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
Moore, B. (1981). Principal component analysis in linear systems: Controllability, observability and model reduction. IEEE Transactions on Automatic Control, 26, 17-32.
Porat, B. (1986). On the estimation of the parameters of vector Gaussian processes from sample covariances. In Proceedings of the 25th IEEE Conference on Decision and Control, Athens, Greece.
Porat, B., & Friedlander, B. (1985). Asymptotic accuracy of ARMA parameter estimation methods based on sample covariances. In Preprints of the 7th IFAC Symposium on Identification and System Parameter Estimation, York, UK.
Söderström, T., Stoica, P., & Friedlander, B. (1991). An indirect prediction error method for system identification. Automatica, 27.
Spanos, J. T., Milman, M. H., & Mingori, D. L. (1992). A new algorithm for $L_2$ optimal model reduction. Automatica, 28(5).
Stoica, P., & Söderström, T. (1989). On reparameterization of loss functions used in estimation and the invariance principle. Signal Processing, 17.
Tjärnström, F. (2002). Variance aspects of $L_2$ model reduction when undermodeling - the output error case. In Proceedings of the 15th IFAC World Congress, Barcelona, Spain.
Tjärnström, F., & Ljung, L. (2001). Variance properties of a two-step ARX estimation procedure.
In Proceedings of the European Control Conference, Porto, Portugal.
Wahlberg, B. (1987). On the identification and approximation of linear systems. Ph.D. thesis 163, Department of Electrical Engineering, Linköping University.
Wahlberg, B. (1989). Model reduction of high order estimated models: The asymptotic ML approach. International Journal of Control, 49(1).
Zhu, Y.-C. (1989). Black-box identification of MIMO transfer functions: Asymptotic properties of prediction error models. International Journal of Adaptive Control and Signal Processing, 3.
Zhu, Y., & Backx, T. (1993). Identification of multivariable industrial processes for simulation, diagnosis and control. Berlin: Springer.

Fredrik Tjärnström was born in Örnsköldsvik, Sweden, in 1973. He received the M.Sc. degree in Applied Physics and Electrical Engineering in 1997 and the Ph.D. degree in Automatic Control in 2002, both from Linköping University. Currently he is a research associate in the Automatic Control group, Department of Electrical Engineering, Linköping University, Linköping, Sweden. His research topics include system identification and its connection to model reduction, bootstrap techniques, and identification of nonlinear systems.

Lennart Ljung received his Ph.D. in Automatic Control from Lund Institute of Technology in 1974. Since 1976 he has been Professor of the chair of Automatic Control in Linköping, Sweden, and is currently Director of the Competence Center "Information Systems for Industrial Control and Supervision" (ISIS). He has held visiting positions at Stanford and MIT and has written several books on system identification and estimation. He is an IEEE Fellow and an IFAC Advisor as well as a member of the Royal Swedish Academy of Sciences (KVA), a member of the Royal Swedish Academy of Engineering Sciences (IVA), and an Honorary Member of the Hungarian Academy of Engineering. He has received honorary doctorates from the Baltic State Technical University in St. Petersburg and from Uppsala University.
In 2002 he received the Quazza Medal from IFAC.


More information

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Recursive Algorithms - Han-Fu Chen

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Recursive Algorithms - Han-Fu Chen CONROL SYSEMS, ROBOICS, AND AUOMAION - Vol. V - Recursive Algorithms - Han-Fu Chen RECURSIVE ALGORIHMS Han-Fu Chen Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy

More information

Position Estimation and Modeling of a Flexible Industrial Robot

Position Estimation and Modeling of a Flexible Industrial Robot Position Estimation and Modeling of a Flexible Industrial Robot Rickard Karlsson, Mikael Norrlöf, Division of Automatic Control Department of Electrical Engineering Linköpings universitet, SE-581 83 Linköping,

More information

Linear stochastic approximation driven by slowly varying Markov chains

Linear stochastic approximation driven by slowly varying Markov chains Available online at www.sciencedirect.com Systems & Control Letters 50 2003 95 102 www.elsevier.com/locate/sysconle Linear stochastic approximation driven by slowly varying Marov chains Viay R. Konda,

More information

ROYAL INSTITUTE OF TECHNOLOGY KUNGL TEKNISKA HÖGSKOLAN. Department of Signals, Sensors & Systems Signal Processing S STOCKHOLM

ROYAL INSTITUTE OF TECHNOLOGY KUNGL TEKNISKA HÖGSKOLAN. Department of Signals, Sensors & Systems Signal Processing S STOCKHOLM Optimal Array Signal Processing in the Presence of oherent Wavefronts P. Stoica B. Ottersten M. Viberg December 1995 To appear in Proceedings ASSP{96 R-S3-SB-9529 ROYAL NSTTUTE OF TEHNOLOGY Department

More information

Implementation of the GIW-PHD filter

Implementation of the GIW-PHD filter Technical reort from Automatic Control at Linöings universitet Imlementation of the GIW-PHD filter Karl Granström, Umut Orguner Division of Automatic Control E-mail: arl@isy.liu.se, umut@isy.liu.se 28th

More information

Measure-Transformed Quasi Maximum Likelihood Estimation

Measure-Transformed Quasi Maximum Likelihood Estimation Measure-Transformed Quasi Maximum Likelihood Estimation 1 Koby Todros and Alfred O. Hero Abstract In this paper, we consider the problem of estimating a deterministic vector parameter when the likelihood

More information

Chapter 7 Interconnected Systems and Feedback: Well-Posedness, Stability, and Performance 7. Introduction Feedback control is a powerful approach to o

Chapter 7 Interconnected Systems and Feedback: Well-Posedness, Stability, and Performance 7. Introduction Feedback control is a powerful approach to o Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology c Chapter 7 Interconnected

More information

Applied Mathematics Letters

Applied Mathematics Letters Applied Mathematics Letters 24 (2011) 797 802 Contents lists available at ScienceDirect Applied Mathematics Letters journal homepage: wwwelseviercom/locate/aml Model order determination using the Hankel

More information

Zeros and zero dynamics

Zeros and zero dynamics CHAPTER 4 Zeros and zero dynamics 41 Zero dynamics for SISO systems Consider a linear system defined by a strictly proper scalar transfer function that does not have any common zero and pole: g(s) =α p(s)

More information

12. Prediction Error Methods (PEM)

12. Prediction Error Methods (PEM) 12. Prediction Error Methods (PEM) EE531 (Semester II, 2010) description optimal prediction Kalman filter statistical results computational aspects 12-1 Description idea: determine the model parameter

More information

Tracking time-varying-coe$cient functions

Tracking time-varying-coe$cient functions INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING Tracking time-varying-coe$cient functions Henrik Aa. Nielsen*, Torben S. Nielsen, Alfred K. Joensen, Henrik Madsen, Jan Holst Department

More information

SIMPLE CONDITIONS FOR PRACTICAL STABILITY OF POSITIVE FRACTIONAL DISCRETE TIME LINEAR SYSTEMS

SIMPLE CONDITIONS FOR PRACTICAL STABILITY OF POSITIVE FRACTIONAL DISCRETE TIME LINEAR SYSTEMS Int. J. Appl. Math. Comput. Sci., 2009, Vol. 19, No. 2, 263 269 DOI: 10.2478/v10006-009-0022-6 SIMPLE CONDITIONS FOR PRACTICAL STABILITY OF POSITIVE FRACTIONAL DISCRETE TIME LINEAR SYSTEMS MIKOŁAJ BUSŁOWICZ,

More information

Department of Physics, Chemistry and Biology

Department of Physics, Chemistry and Biology Department of Physics, Chemistry and Biology Master s Thesis Quantum Chaos On A Curved Surface John Wärnå LiTH-IFM-A-EX-8/7-SE Department of Physics, Chemistry and Biology Linköpings universitet, SE-58

More information

A Mathematica Toolbox for Signals, Models and Identification

A Mathematica Toolbox for Signals, Models and Identification The International Federation of Automatic Control A Mathematica Toolbox for Signals, Models and Identification Håkan Hjalmarsson Jonas Sjöberg ACCESS Linnaeus Center, Electrical Engineering, KTH Royal

More information

Large Sample Properties of Estimators in the Classical Linear Regression Model

Large Sample Properties of Estimators in the Classical Linear Regression Model Large Sample Properties of Estimators in the Classical Linear Regression Model 7 October 004 A. Statement of the classical linear regression model The classical linear regression model can be written in

More information

Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference

Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference Alan Edelman Department of Mathematics, Computer Science and AI Laboratories. E-mail: edelman@math.mit.edu N. Raj Rao Deparment

More information

Parameter Estimation in a Moving Horizon Perspective

Parameter Estimation in a Moving Horizon Perspective Parameter Estimation in a Moving Horizon Perspective State and Parameter Estimation in Dynamical Systems Reglerteknik, ISY, Linköpings Universitet State and Parameter Estimation in Dynamical Systems OUTLINE

More information

AUTOMATIC CONTROL COMMUNICATION SYSTEMS LINKÖPINGS UNIVERSITET. Questions AUTOMATIC CONTROL COMMUNICATION SYSTEMS LINKÖPINGS UNIVERSITET

AUTOMATIC CONTROL COMMUNICATION SYSTEMS LINKÖPINGS UNIVERSITET. Questions AUTOMATIC CONTROL COMMUNICATION SYSTEMS LINKÖPINGS UNIVERSITET The Problem Identification of Linear and onlinear Dynamical Systems Theme : Curve Fitting Division of Automatic Control Linköping University Sweden Data from Gripen Questions How do the control surface

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Connexions module: m11446 1 Maximum Likelihood Estimation Clayton Scott Robert Nowak This work is produced by The Connexions Project and licensed under the Creative Commons Attribution License Abstract

More information

On some interpolation problems

On some interpolation problems On some interpolation problems A. Gombani Gy. Michaletzky LADSEB-CNR Eötvös Loránd University Corso Stati Uniti 4 H-1111 Pázmány Péter sétány 1/C, 35127 Padova, Italy Computer and Automation Institute

More information

On Moving Average Parameter Estimation

On Moving Average Parameter Estimation On Moving Average Parameter Estimation Niclas Sandgren and Petre Stoica Contact information: niclas.sandgren@it.uu.se, tel: +46 8 473392 Abstract Estimation of the autoregressive moving average (ARMA)

More information

Errors-in-variables identification through covariance matching: Analysis of a colored measurement noise case

Errors-in-variables identification through covariance matching: Analysis of a colored measurement noise case 008 American Control Conference Westin Seattle Hotel Seattle Washington USA June -3 008 WeB8.4 Errors-in-variables identification through covariance matching: Analysis of a colored measurement noise case

More information

Robust Multivariable Control

Robust Multivariable Control Lecture 2 Anders Helmersson anders.helmersson@liu.se ISY/Reglerteknik Linköpings universitet Today s topics Today s topics Norms Today s topics Norms Representation of dynamic systems Today s topics Norms

More information

2 Chapter 1 A nonlinear black box structure for a dynamical system is a model structure that is prepared to describe virtually any nonlinear dynamics.

2 Chapter 1 A nonlinear black box structure for a dynamical system is a model structure that is prepared to describe virtually any nonlinear dynamics. 1 SOME ASPECTS O OLIEAR BLACK-BOX MODELIG I SYSTEM IDETIFICATIO Lennart Ljung Dept of Electrical Engineering, Linkoping University, Sweden, ljung@isy.liu.se 1 ITRODUCTIO The key problem in system identication

More information

Statistical and Adaptive Signal Processing

Statistical and Adaptive Signal Processing r Statistical and Adaptive Signal Processing Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing Dimitris G. Manolakis Massachusetts Institute of Technology Lincoln Laboratory

More information

Norm invariant discretization for sampled-data fault detection

Norm invariant discretization for sampled-data fault detection Automatica 41 (25 1633 1637 www.elsevier.com/locate/automatica Technical communique Norm invariant discretization for sampled-data fault detection Iman Izadi, Tongwen Chen, Qing Zhao Department of Electrical

More information

Chapter 6: Nonparametric Time- and Frequency-Domain Methods. Problems presented by Uwe

Chapter 6: Nonparametric Time- and Frequency-Domain Methods. Problems presented by Uwe System Identification written by L. Ljung, Prentice Hall PTR, 1999 Chapter 6: Nonparametric Time- and Frequency-Domain Methods Problems presented by Uwe System Identification Problems Chapter 6 p. 1/33

More information

IDENTIFICATION OF A TWO-INPUT SYSTEM: VARIANCE ANALYSIS

IDENTIFICATION OF A TWO-INPUT SYSTEM: VARIANCE ANALYSIS IDENTIFICATION OF A TWO-INPUT SYSTEM: VARIANCE ANALYSIS M Gevers,1 L Mišković,2 D Bonvin A Karimi Center for Systems Engineering and Applied Mechanics (CESAME) Université Catholique de Louvain B-1348 Louvain-la-Neuve,

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 207, v 260) Contents Matrices and Systems of Linear Equations Systems of Linear Equations Elimination, Matrix Formulation

More information

Eigenvalue problems and optimization

Eigenvalue problems and optimization Notes for 2016-04-27 Seeking structure For the past three weeks, we have discussed rather general-purpose optimization methods for nonlinear equation solving and optimization. In practice, of course, we

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

Lifting to non-integral idempotents

Lifting to non-integral idempotents Journal of Pure and Applied Algebra 162 (2001) 359 366 www.elsevier.com/locate/jpaa Lifting to non-integral idempotents Georey R. Robinson School of Mathematics and Statistics, University of Birmingham,

More information

Outline lecture 2 2(30)

Outline lecture 2 2(30) Outline lecture 2 2(3), Lecture 2 Linear Regression it is our firm belief that an understanding of linear models is essential for understanding nonlinear ones Thomas Schön Division of Automatic Control

More information

Title without the persistently exciting c. works must be obtained from the IEE

Title without the persistently exciting c.   works must be obtained from the IEE Title Exact convergence analysis of adapt without the persistently exciting c Author(s) Sakai, H; Yang, JM; Oka, T Citation IEEE TRANSACTIONS ON SIGNAL 55(5): 2077-2083 PROCESS Issue Date 2007-05 URL http://hdl.handle.net/2433/50544

More information

Algorithm for Multiple Model Adaptive Control Based on Input-Output Plant Model

Algorithm for Multiple Model Adaptive Control Based on Input-Output Plant Model BULGARIAN ACADEMY OF SCIENCES CYBERNEICS AND INFORMAION ECHNOLOGIES Volume No Sofia Algorithm for Multiple Model Adaptive Control Based on Input-Output Plant Model sonyo Slavov Department of Automatics

More information

Influence of Laser Radar Sensor Parameters on Range Measurement and Shape Fitting Uncertainties

Influence of Laser Radar Sensor Parameters on Range Measurement and Shape Fitting Uncertainties Influence of Laser Radar Sensor Parameters on Range Measurement and Shape Fitting Uncertainties Christina Grönwall, Ove Steinvall, Fredrik Gustafsson, Tomas Chevalier Division of Automatic Control Department

More information